Test Report: Docker_Linux_crio_arm64 17907

7ea9a0daea14a922bd9e219098252b67b1b782a8:2024-01-08:32610

Failed tests (6/316)

| Order | Failed Test                                         | Duration (s) |
|-------|-----------------------------------------------------|--------------|
| 35    | TestAddons/parallel/Ingress                         |       167.77 |
| 167   | TestIngressAddonLegacy/serial/ValidateIngressAddons |       212.31 |
| 217   | TestMultiNode/serial/PingHostFrom2Pods              |         3.91 |
| 239   | TestRunningBinaryUpgrade                            |        76.32 |
| 242   | TestMissingContainerUpgrade                         |       188.28 |
| 254   | TestStoppedBinaryUpgrade/Upgrade                    |       105.01 |
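Each failure can be re-run in isolation with Go's standard -run filter. A minimal sketch, assuming a minikube source checkout with out/minikube-linux-arm64 already built; -minikube-start-args is assumed to be the harness flag that carries start options, mirroring this job's docker driver and cri-o runtime:

	# re-run only the failing ingress test against docker + cri-o
	go test ./test/integration -v -timeout 30m \
		-run "TestAddons/parallel/Ingress" \
		-args -minikube-start-args="--driver=docker --container-runtime=crio"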
TestAddons/parallel/Ingress (167.77s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-888287 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-888287 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-888287 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b2345bd9-b801-4f89-b95b-d5894feb7eeb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b2345bd9-b801-4f89-b95b-d5894feb7eeb] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003924968s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-888287 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-888287 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.079987705s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
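Exit status 28 here is curl's own exit code propagated back through ssh: 28 means "operation timed out", so the request to the ingress controller hung rather than being refused. A hedged way to reproduce the probe with a bounded wait and verbose output (same profile and URL the test uses):

	# curl exit code 28 = operation timed out; --max-time bounds the wait, -v shows progress
	out/minikube-linux-arm64 -p addons-888287 ssh -- \
		curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'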
addons_test.go:286: (dbg) Run:  kubectl --context addons-888287 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-888287 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.047527775s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
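The nslookup timeout means the ingress-dns responder at 192.168.49.2 never answered on port 53. Two quick checks, sketched under the assumption that dig is available on the host and ss inside the node:

	# query the node's DNS responder directly with a short, single-try timeout
	dig +time=2 +tries=1 @192.168.49.2 hello-john.test
	# check whether anything is listening on UDP/53 inside the node
	out/minikube-linux-arm64 -p addons-888287 ssh -- sudo ss -ulpn 'sport = :53'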
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-888287 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-888287 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-888287 addons disable ingress --alsologtostderr -v=1: (7.768988466s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-888287
helpers_test.go:235: (dbg) docker inspect addons-888287:

-- stdout --
	[
	    {
	        "Id": "6f990fac2af108669a80d387eb72bca995de3fbc7e9cf5f792812e13d4a5be67",
	        "Created": "2024-01-08T20:10:50.11977239Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 639765,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T20:10:50.483577057Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3bfff26a1ae256fcdf8f10a333efdefbe26edc5c1669e1cc5c973c016e44d3c4",
	        "ResolvConfPath": "/var/lib/docker/containers/6f990fac2af108669a80d387eb72bca995de3fbc7e9cf5f792812e13d4a5be67/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6f990fac2af108669a80d387eb72bca995de3fbc7e9cf5f792812e13d4a5be67/hostname",
	        "HostsPath": "/var/lib/docker/containers/6f990fac2af108669a80d387eb72bca995de3fbc7e9cf5f792812e13d4a5be67/hosts",
	        "LogPath": "/var/lib/docker/containers/6f990fac2af108669a80d387eb72bca995de3fbc7e9cf5f792812e13d4a5be67/6f990fac2af108669a80d387eb72bca995de3fbc7e9cf5f792812e13d4a5be67-json.log",
	        "Name": "/addons-888287",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-888287:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-888287",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/62f16b0c5c940b3df6eee4d48962ba85cb6595e0abe2ce47e7eaafdc219f0aed-init/diff:/var/lib/docker/overlay2/6dc70d5fd4ec367ecfc7dc99fc7bcaf35d9752c3024a41d78b490451f211e3b4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/62f16b0c5c940b3df6eee4d48962ba85cb6595e0abe2ce47e7eaafdc219f0aed/merged",
	                "UpperDir": "/var/lib/docker/overlay2/62f16b0c5c940b3df6eee4d48962ba85cb6595e0abe2ce47e7eaafdc219f0aed/diff",
	                "WorkDir": "/var/lib/docker/overlay2/62f16b0c5c940b3df6eee4d48962ba85cb6595e0abe2ce47e7eaafdc219f0aed/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-888287",
	                "Source": "/var/lib/docker/volumes/addons-888287/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-888287",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-888287",
	                "name.minikube.sigs.k8s.io": "addons-888287",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1e9fd8919e24941ef3572cc9c92fe685d9b10e028975fc842a885e5a37acad1d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33404"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33403"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33400"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33402"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33401"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1e9fd8919e24",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-888287": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6f990fac2af1",
	                        "addons-888287"
	                    ],
	                    "NetworkID": "afaa4e47158599746892fa08dc68baf9ef3b242d04e27c07cb90ba220e7e8f01",
	                    "EndpointID": "638297d3663ce3f8b507942392ae36a51eba823aeec0b5e388deff3b24d084c3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
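Rather than scanning the full JSON, individual fields can be pulled from docker inspect with a Go template; the harness itself does exactly this further down in this log for the SSH port. For example:

	# host port mapped to the node's SSH port 22 (33404 in this run)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-888287
	# the node's IP on the cluster network (192.168.49.2 here)
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-888287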
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-888287 -n addons-888287
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-888287 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-888287 logs -n 25: (1.623619853s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 08 Jan 24 20:10 UTC | 08 Jan 24 20:10 UTC |
	| delete  | -p download-only-031263                                                                     | download-only-031263   | jenkins | v1.32.0 | 08 Jan 24 20:10 UTC | 08 Jan 24 20:10 UTC |
	| delete  | -p download-only-031263                                                                     | download-only-031263   | jenkins | v1.32.0 | 08 Jan 24 20:10 UTC | 08 Jan 24 20:10 UTC |
	| start   | --download-only -p                                                                          | download-docker-824222 | jenkins | v1.32.0 | 08 Jan 24 20:10 UTC |                     |
	|         | download-docker-824222                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-824222                                                                   | download-docker-824222 | jenkins | v1.32.0 | 08 Jan 24 20:10 UTC | 08 Jan 24 20:10 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-174483   | jenkins | v1.32.0 | 08 Jan 24 20:10 UTC |                     |
	|         | binary-mirror-174483                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43009                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-174483                                                                     | binary-mirror-174483   | jenkins | v1.32.0 | 08 Jan 24 20:10 UTC | 08 Jan 24 20:10 UTC |
	| addons  | enable dashboard -p                                                                         | addons-888287          | jenkins | v1.32.0 | 08 Jan 24 20:10 UTC |                     |
	|         | addons-888287                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-888287          | jenkins | v1.32.0 | 08 Jan 24 20:10 UTC |                     |
	|         | addons-888287                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-888287 --wait=true                                                                | addons-888287          | jenkins | v1.32.0 | 08 Jan 24 20:10 UTC | 08 Jan 24 20:13 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-888287          | jenkins | v1.32.0 | 08 Jan 24 20:13 UTC | 08 Jan 24 20:13 UTC |
	|         | -p addons-888287                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-888287 ip                                                                            | addons-888287          | jenkins | v1.32.0 | 08 Jan 24 20:13 UTC | 08 Jan 24 20:13 UTC |
	| addons  | addons-888287 addons disable                                                                | addons-888287          | jenkins | v1.32.0 | 08 Jan 24 20:13 UTC | 08 Jan 24 20:13 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-888287          | jenkins | v1.32.0 | 08 Jan 24 20:13 UTC | 08 Jan 24 20:13 UTC |
	|         | -p addons-888287                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-888287          | jenkins | v1.32.0 | 08 Jan 24 20:13 UTC | 08 Jan 24 20:13 UTC |
	|         | addons-888287                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-888287 ssh cat                                                                       | addons-888287          | jenkins | v1.32.0 | 08 Jan 24 20:13 UTC | 08 Jan 24 20:13 UTC |
	|         | /opt/local-path-provisioner/pvc-eac25f32-f886-438b-b976-f4205af199ef_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-888287 addons disable                                                                | addons-888287          | jenkins | v1.32.0 | 08 Jan 24 20:13 UTC | 08 Jan 24 20:14 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-888287          | jenkins | v1.32.0 | 08 Jan 24 20:14 UTC | 08 Jan 24 20:14 UTC |
	|         | addons-888287                                                                               |                        |         |         |                     |                     |
	| addons  | addons-888287 addons                                                                        | addons-888287          | jenkins | v1.32.0 | 08 Jan 24 20:14 UTC | 08 Jan 24 20:14 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-888287 addons                                                                        | addons-888287          | jenkins | v1.32.0 | 08 Jan 24 20:14 UTC | 08 Jan 24 20:14 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-888287 addons                                                                        | addons-888287          | jenkins | v1.32.0 | 08 Jan 24 20:14 UTC | 08 Jan 24 20:14 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-888287 ssh curl -s                                                                   | addons-888287          | jenkins | v1.32.0 | 08 Jan 24 20:14 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-888287 ip                                                                            | addons-888287          | jenkins | v1.32.0 | 08 Jan 24 20:16 UTC | 08 Jan 24 20:16 UTC |
	| addons  | addons-888287 addons disable                                                                | addons-888287          | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-888287 addons disable                                                                | addons-888287          | jenkins | v1.32.0 | 08 Jan 24 20:17 UTC | 08 Jan 24 20:17 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:10:26
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:10:26.165259  639301 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:10:26.165385  639301 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:10:26.165393  639301 out.go:309] Setting ErrFile to fd 2...
	I0108 20:10:26.165399  639301 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:10:26.165657  639301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-633350/.minikube/bin
	I0108 20:10:26.166120  639301 out.go:303] Setting JSON to false
	I0108 20:10:26.167060  639301 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10369,"bootTime":1704734258,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0108 20:10:26.167135  639301 start.go:138] virtualization:  
	I0108 20:10:26.169678  639301 out.go:177] * [addons-888287] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 20:10:26.172381  639301 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:10:26.174561  639301 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:10:26.172498  639301 notify.go:220] Checking for updates...
	I0108 20:10:26.177092  639301 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-633350/kubeconfig
	I0108 20:10:26.179444  639301 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-633350/.minikube
	I0108 20:10:26.181376  639301 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0108 20:10:26.183540  639301 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:10:26.186003  639301 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:10:26.209163  639301 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:10:26.209300  639301 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:10:26.292671  639301 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-08 20:10:26.281983781 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:10:26.292784  639301 docker.go:295] overlay module found
	I0108 20:10:26.296745  639301 out.go:177] * Using the docker driver based on user configuration
	I0108 20:10:26.298670  639301 start.go:298] selected driver: docker
	I0108 20:10:26.298692  639301 start.go:902] validating driver "docker" against <nil>
	I0108 20:10:26.298707  639301 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:10:26.299350  639301 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:10:26.365053  639301 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-08 20:10:26.356123948 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:10:26.365207  639301 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 20:10:26.365514  639301 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 20:10:26.367783  639301 out.go:177] * Using Docker driver with root privileges
	I0108 20:10:26.369830  639301 cni.go:84] Creating CNI manager for ""
	I0108 20:10:26.369851  639301 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 20:10:26.369865  639301 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 20:10:26.369881  639301 start_flags.go:323] config:
	{Name:addons-888287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-888287 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:10:26.372549  639301 out.go:177] * Starting control plane node addons-888287 in cluster addons-888287
	I0108 20:10:26.374644  639301 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 20:10:26.376968  639301 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:10:26.379170  639301 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:10:26.379227  639301 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0108 20:10:26.379239  639301 cache.go:56] Caching tarball of preloaded images
	I0108 20:10:26.379263  639301 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0108 20:10:26.379326  639301 preload.go:174] Found /home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0108 20:10:26.379337  639301 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 20:10:26.379696  639301 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/config.json ...
	I0108 20:10:26.379726  639301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/config.json: {Name:mk4cf2d358272b6d1b235047d1d93678c16bcb28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:26.395648  639301 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0108 20:10:26.395795  639301 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I0108 20:10:26.395819  639301 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory, skipping pull
	I0108 20:10:26.395828  639301 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in cache, skipping pull
	I0108 20:10:26.395837  639301 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I0108 20:10:26.395846  639301 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c from local cache
	I0108 20:10:41.951966  639301 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c from cached tarball
	I0108 20:10:41.952007  639301 cache.go:194] Successfully downloaded all kic artifacts
	I0108 20:10:41.952057  639301 start.go:365] acquiring machines lock for addons-888287: {Name:mkf3eaaa78ea9460e710bc954b3d195437674b34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:10:41.952186  639301 start.go:369] acquired machines lock for "addons-888287" in 106.995µs
	I0108 20:10:41.952215  639301 start.go:93] Provisioning new machine with config: &{Name:addons-888287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-888287 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 20:10:41.952296  639301 start.go:125] createHost starting for "" (driver="docker")
	I0108 20:10:41.955186  639301 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0108 20:10:41.955442  639301 start.go:159] libmachine.API.Create for "addons-888287" (driver="docker")
	I0108 20:10:41.955475  639301 client.go:168] LocalClient.Create starting
	I0108 20:10:41.955576  639301 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem
	I0108 20:10:42.910371  639301 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem
	I0108 20:10:43.490836  639301 cli_runner.go:164] Run: docker network inspect addons-888287 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 20:10:43.507698  639301 cli_runner.go:211] docker network inspect addons-888287 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 20:10:43.507786  639301 network_create.go:281] running [docker network inspect addons-888287] to gather additional debugging logs...
	I0108 20:10:43.507809  639301 cli_runner.go:164] Run: docker network inspect addons-888287
	W0108 20:10:43.524886  639301 cli_runner.go:211] docker network inspect addons-888287 returned with exit code 1
	I0108 20:10:43.524917  639301 network_create.go:284] error running [docker network inspect addons-888287]: docker network inspect addons-888287: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-888287 not found
	I0108 20:10:43.524931  639301 network_create.go:286] output of [docker network inspect addons-888287]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-888287 not found
	
	** /stderr **
	I0108 20:10:43.525030  639301 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 20:10:43.542896  639301 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002535080}
	I0108 20:10:43.542938  639301 network_create.go:124] attempt to create docker network addons-888287 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0108 20:10:43.542995  639301 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-888287 addons-888287
	I0108 20:10:43.614851  639301 network_create.go:108] docker network addons-888287 192.168.49.0/24 created
	I0108 20:10:43.614883  639301 kic.go:121] calculated static IP "192.168.49.2" for the "addons-888287" container
	I0108 20:10:43.614960  639301 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 20:10:43.631393  639301 cli_runner.go:164] Run: docker volume create addons-888287 --label name.minikube.sigs.k8s.io=addons-888287 --label created_by.minikube.sigs.k8s.io=true
	I0108 20:10:43.652679  639301 oci.go:103] Successfully created a docker volume addons-888287
	I0108 20:10:43.652764  639301 cli_runner.go:164] Run: docker run --rm --name addons-888287-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-888287 --entrypoint /usr/bin/test -v addons-888287:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I0108 20:10:45.817974  639301 cli_runner.go:217] Completed: docker run --rm --name addons-888287-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-888287 --entrypoint /usr/bin/test -v addons-888287:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib: (2.165173134s)
	I0108 20:10:45.818012  639301 oci.go:107] Successfully prepared a docker volume addons-888287
	I0108 20:10:45.818035  639301 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:10:45.818054  639301 kic.go:194] Starting extracting preloaded images to volume ...
	I0108 20:10:45.818142  639301 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-888287:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 20:10:50.033436  639301 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-888287:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (4.215241126s)
	I0108 20:10:50.033470  639301 kic.go:203] duration metric: took 4.215413 seconds to extract preloaded images to volume
	W0108 20:10:50.033617  639301 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 20:10:50.033736  639301 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 20:10:50.103442  639301 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-888287 --name addons-888287 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-888287 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-888287 --network addons-888287 --ip 192.168.49.2 --volume addons-888287:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0108 20:10:50.491798  639301 cli_runner.go:164] Run: docker container inspect addons-888287 --format={{.State.Running}}
	I0108 20:10:50.513996  639301 cli_runner.go:164] Run: docker container inspect addons-888287 --format={{.State.Status}}
	I0108 20:10:50.536934  639301 cli_runner.go:164] Run: docker exec addons-888287 stat /var/lib/dpkg/alternatives/iptables
	I0108 20:10:50.615831  639301 oci.go:144] the created container "addons-888287" has a running status.
	I0108 20:10:50.615864  639301 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17907-633350/.minikube/machines/addons-888287/id_rsa...
	I0108 20:10:51.606325  639301 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17907-633350/.minikube/machines/addons-888287/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 20:10:51.629265  639301 cli_runner.go:164] Run: docker container inspect addons-888287 --format={{.State.Status}}
	I0108 20:10:51.648472  639301 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 20:10:51.648496  639301 kic_runner.go:114] Args: [docker exec --privileged addons-888287 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 20:10:51.714558  639301 cli_runner.go:164] Run: docker container inspect addons-888287 --format={{.State.Status}}
	I0108 20:10:51.736394  639301 machine.go:88] provisioning docker machine ...
	I0108 20:10:51.736428  639301 ubuntu.go:169] provisioning hostname "addons-888287"
	I0108 20:10:51.736497  639301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-888287
	I0108 20:10:51.756415  639301 main.go:141] libmachine: Using SSH client type: native
	I0108 20:10:51.756881  639301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33404 <nil> <nil>}
	I0108 20:10:51.756900  639301 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-888287 && echo "addons-888287" | sudo tee /etc/hostname
	I0108 20:10:51.908486  639301 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-888287
	
	I0108 20:10:51.908567  639301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-888287
	I0108 20:10:51.926532  639301 main.go:141] libmachine: Using SSH client type: native
	I0108 20:10:51.926954  639301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33404 <nil> <nil>}
	I0108 20:10:51.926979  639301 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-888287' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-888287/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-888287' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:10:52.063526  639301 main.go:141] libmachine: SSH cmd err, output: <nil>: 
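The /etc/hosts rewrite performed by the script above can be spot-checked from the host; a sketch, assuming the profile is still running:

	# Confirm 127.0.1.1 now maps to the node hostname inside the container.
	minikube -p addons-888287 ssh -- grep addons-888287 /etc/hosts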
	I0108 20:10:52.063564  639301 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17907-633350/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-633350/.minikube}
	I0108 20:10:52.063586  639301 ubuntu.go:177] setting up certificates
	I0108 20:10:52.063594  639301 provision.go:83] configureAuth start
	I0108 20:10:52.063660  639301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-888287
	I0108 20:10:52.083004  639301 provision.go:138] copyHostCerts
	I0108 20:10:52.083086  639301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-633350/.minikube/ca.pem (1082 bytes)
	I0108 20:10:52.083216  639301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-633350/.minikube/cert.pem (1123 bytes)
	I0108 20:10:52.083278  639301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-633350/.minikube/key.pem (1679 bytes)
	I0108 20:10:52.083328  639301 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca-key.pem org=jenkins.addons-888287 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-888287]
	I0108 20:10:52.798010  639301 provision.go:172] copyRemoteCerts
	I0108 20:10:52.798087  639301 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:10:52.798129  639301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-888287
	I0108 20:10:52.816264  639301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/addons-888287/id_rsa Username:docker}
	I0108 20:10:52.917098  639301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 20:10:52.945323  639301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0108 20:10:52.973483  639301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
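A quick way to confirm the three certificates copied above reached the node (a sketch; the paths come straight from the scp lines):

	# ca.pem, server.pem and server-key.pem should all be present under /etc/docker.
	minikube -p addons-888287 ssh -- sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem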
	I0108 20:10:53.001801  639301 provision.go:86] duration metric: configureAuth took 938.192941ms
	I0108 20:10:53.001828  639301 ubuntu.go:193] setting minikube options for container-runtime
	I0108 20:10:53.002022  639301 config.go:182] Loaded profile config "addons-888287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:10:53.002141  639301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-888287
	I0108 20:10:53.023981  639301 main.go:141] libmachine: Using SSH client type: native
	I0108 20:10:53.024403  639301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33404 <nil> <nil>}
	I0108 20:10:53.024424  639301 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 20:10:53.278174  639301 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 20:10:53.278194  639301 machine.go:91] provisioned docker machine in 1.541778074s
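The CRI-O options file written by the SSH command above can be inspected after provisioning; a sketch:

	# The provisioner drops the insecure-registry flag into an env file read by crio.service.
	minikube -p addons-888287 ssh -- cat /etc/sysconfig/crio.minikube
	minikube -p addons-888287 ssh -- sudo systemctl is-active crio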
	I0108 20:10:53.278204  639301 client.go:171] LocalClient.Create took 11.322723404s
	I0108 20:10:53.278216  639301 start.go:167] duration metric: libmachine.API.Create for "addons-888287" took 11.322775301s
	I0108 20:10:53.278223  639301 start.go:300] post-start starting for "addons-888287" (driver="docker")
	I0108 20:10:53.278232  639301 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:10:53.278294  639301 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:10:53.278332  639301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-888287
	I0108 20:10:53.297733  639301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/addons-888287/id_rsa Username:docker}
	I0108 20:10:53.397059  639301 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:10:53.401009  639301 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 20:10:53.401041  639301 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 20:10:53.401053  639301 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 20:10:53.401060  639301 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 20:10:53.401070  639301 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-633350/.minikube/addons for local assets ...
	I0108 20:10:53.401135  639301 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-633350/.minikube/files for local assets ...
	I0108 20:10:53.401157  639301 start.go:303] post-start completed in 122.927921ms
	I0108 20:10:53.401486  639301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-888287
	I0108 20:10:53.419067  639301 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/config.json ...
	I0108 20:10:53.419354  639301 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:10:53.419405  639301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-888287
	I0108 20:10:53.436328  639301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/addons-888287/id_rsa Username:docker}
	I0108 20:10:53.532216  639301 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 20:10:53.537555  639301 start.go:128] duration metric: createHost completed in 11.585244371s
	I0108 20:10:53.537577  639301 start.go:83] releasing machines lock for "addons-888287", held for 11.585379798s
	I0108 20:10:53.537649  639301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-888287
	I0108 20:10:53.554086  639301 ssh_runner.go:195] Run: cat /version.json
	I0108 20:10:53.554111  639301 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 20:10:53.554152  639301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-888287
	I0108 20:10:53.554178  639301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-888287
	I0108 20:10:53.578432  639301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/addons-888287/id_rsa Username:docker}
	I0108 20:10:53.585563  639301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/addons-888287/id_rsa Username:docker}
	I0108 20:10:53.805450  639301 ssh_runner.go:195] Run: systemctl --version
	I0108 20:10:53.810796  639301 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 20:10:53.958165  639301 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 20:10:53.963890  639301 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:10:53.997383  639301 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 20:10:53.997477  639301 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:10:54.041929  639301 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0108 20:10:54.041953  639301 start.go:475] detecting cgroup driver to use...
	I0108 20:10:54.041997  639301 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 20:10:54.042081  639301 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 20:10:54.062386  639301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 20:10:54.076072  639301 docker.go:217] disabling cri-docker service (if available) ...
	I0108 20:10:54.076158  639301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 20:10:54.092719  639301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 20:10:54.110356  639301 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 20:10:54.212157  639301 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 20:10:54.316065  639301 docker.go:233] disabling docker service ...
	I0108 20:10:54.316138  639301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 20:10:54.338009  639301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 20:10:54.351732  639301 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 20:10:54.450805  639301 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 20:10:54.562794  639301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 20:10:54.576021  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:10:54.596057  639301 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 20:10:54.596164  639301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:10:54.607837  639301 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 20:10:54.607936  639301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:10:54.619842  639301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:10:54.631230  639301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:10:54.642697  639301 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 20:10:54.654455  639301 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 20:10:54.664378  639301 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
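The sed edits above are easy to verify on the node; a minimal check:

	# pause_image, cgroup_manager and conmon_cgroup should all reflect the edits above.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	sudo sysctl net.ipv4.ip_forward   # expect: net.ipv4.ip_forward = 1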
	I0108 20:10:54.674015  639301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:10:54.776175  639301 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 20:10:54.898883  639301 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 20:10:54.898969  639301 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 20:10:54.903626  639301 start.go:543] Will wait 60s for crictl version
	I0108 20:10:54.903748  639301 ssh_runner.go:195] Run: which crictl
	I0108 20:10:54.908047  639301 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 20:10:54.950137  639301 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0108 20:10:54.950240  639301 ssh_runner.go:195] Run: crio --version
	I0108 20:10:54.993837  639301 ssh_runner.go:195] Run: crio --version
	I0108 20:10:55.045535  639301 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0108 20:10:55.048011  639301 cli_runner.go:164] Run: docker network inspect addons-888287 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 20:10:55.066361  639301 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0108 20:10:55.071072  639301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 20:10:55.085582  639301 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:10:55.085657  639301 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:10:55.157766  639301 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 20:10:55.157790  639301 crio.go:415] Images already preloaded, skipping extraction
	I0108 20:10:55.157854  639301 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:10:55.200662  639301 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 20:10:55.200685  639301 cache_images.go:84] Images are preloaded, skipping loading
	I0108 20:10:55.200759  639301 ssh_runner.go:195] Run: crio config
	I0108 20:10:55.256636  639301 cni.go:84] Creating CNI manager for ""
	I0108 20:10:55.256661  639301 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 20:10:55.256713  639301 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 20:10:55.256741  639301 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-888287 NodeName:addons-888287 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 20:10:55.256940  639301 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-888287"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
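A config like the one above can be validated before it touches the node; a sketch using the kubeadm binary version shown in this log:

	# Dry-run init against the generated config; kubeadm writes to a temp dir instead of /etc/kubernetes.
	sudo /var/lib/minikube/binaries/v1.28.4/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run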
	
	I0108 20:10:55.257035  639301 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-888287 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-888287 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
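Once the unit text above is installed (the scp lines follow below), systemd can show the merged result; a sketch:

	# Print kubelet.service together with the 10-kubeadm.conf drop-in.
	minikube -p addons-888287 ssh -- sudo systemctl cat kubelet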
	I0108 20:10:55.257125  639301 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 20:10:55.267644  639301 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 20:10:55.267740  639301 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 20:10:55.278078  639301 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0108 20:10:55.299269  639301 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 20:10:55.320759  639301 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0108 20:10:55.342610  639301 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0108 20:10:55.347296  639301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 20:10:55.360788  639301 certs.go:56] Setting up /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287 for IP: 192.168.49.2
	I0108 20:10:55.360821  639301 certs.go:190] acquiring lock for shared ca certs: {Name:mk28124a9f2c671691fce8a4307fb3ec09e97812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:55.361466  639301 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17907-633350/.minikube/ca.key
	I0108 20:10:55.989481  639301 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-633350/.minikube/ca.crt ...
	I0108 20:10:55.989511  639301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/.minikube/ca.crt: {Name:mk67e797c5491c11abce728a3a00c83827140c67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:55.989709  639301 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-633350/.minikube/ca.key ...
	I0108 20:10:55.989731  639301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/.minikube/ca.key: {Name:mkdb467c82614047390d843c2af44b44bccb5ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:55.990397  639301 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17907-633350/.minikube/proxy-client-ca.key
	I0108 20:10:56.853705  639301 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-633350/.minikube/proxy-client-ca.crt ...
	I0108 20:10:56.853735  639301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/.minikube/proxy-client-ca.crt: {Name:mk01cf3f2fb8a904a885604181eafe21f72025e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:56.853918  639301 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-633350/.minikube/proxy-client-ca.key ...
	I0108 20:10:56.853931  639301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/.minikube/proxy-client-ca.key: {Name:mk73f9d652a4f2727945606b48cb8b62060cd4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:56.854045  639301 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.key
	I0108 20:10:56.854062  639301 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt with IP's: []
	I0108 20:10:57.529575  639301 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt ...
	I0108 20:10:57.529607  639301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: {Name:mkdde4b5a1cc89ebb9ae16385372bdd887ae5916 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:57.529795  639301 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.key ...
	I0108 20:10:57.529808  639301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.key: {Name:mk0afb92306d54b0a076251632e908f2814e75fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:57.529889  639301 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/apiserver.key.dd3b5fb2
	I0108 20:10:57.529913  639301 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 20:10:58.144526  639301 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/apiserver.crt.dd3b5fb2 ...
	I0108 20:10:58.144555  639301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/apiserver.crt.dd3b5fb2: {Name:mk2840262ed8a0de9f7c83be16004d8810438510 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:58.144732  639301 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/apiserver.key.dd3b5fb2 ...
	I0108 20:10:58.144748  639301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/apiserver.key.dd3b5fb2: {Name:mkea64f46b067c2b3150ef0f0beadd53ce9e579d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:58.144826  639301 certs.go:337] copying /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/apiserver.crt
	I0108 20:10:58.144901  639301 certs.go:341] copying /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/apiserver.key
	I0108 20:10:58.144956  639301 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/proxy-client.key
	I0108 20:10:58.144975  639301 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/proxy-client.crt with IP's: []
	I0108 20:10:58.692935  639301 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/proxy-client.crt ...
	I0108 20:10:58.692969  639301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/proxy-client.crt: {Name:mkffacd20c4b19a805e228119d4bf69f61e04359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:58.693143  639301 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/proxy-client.key ...
	I0108 20:10:58.693164  639301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/proxy-client.key: {Name:mk215076d941a32cab7b0d5f7f549e27675c486e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:58.693350  639301 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 20:10:58.693396  639301 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem (1082 bytes)
	I0108 20:10:58.693427  639301 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem (1123 bytes)
	I0108 20:10:58.693458  639301 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/home/jenkins/minikube-integration/17907-633350/.minikube/certs/key.pem (1679 bytes)
	I0108 20:10:58.694054  639301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 20:10:58.722692  639301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 20:10:58.750623  639301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 20:10:58.779043  639301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 20:10:58.807378  639301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 20:10:58.834925  639301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 20:10:58.865865  639301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 20:10:58.896100  639301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 20:10:58.923461  639301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 20:10:58.951632  639301 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 20:10:58.972553  639301 ssh_runner.go:195] Run: openssl version
	I0108 20:10:58.979403  639301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 20:10:58.991001  639301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:10:58.995533  639301 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:10:58.995594  639301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:10:59.004298  639301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 20:10:59.015676  639301 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 20:10:59.019836  639301 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 20:10:59.019906  639301 kubeadm.go:404] StartCluster: {Name:addons-888287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-888287 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:10:59.019992  639301 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 20:10:59.020051  639301 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 20:10:59.065949  639301 cri.go:89] found id: ""
	I0108 20:10:59.066016  639301 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 20:10:59.077050  639301 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 20:10:59.087657  639301 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 20:10:59.087731  639301 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 20:10:59.097996  639301 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 20:10:59.098049  639301 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 20:10:59.153352  639301 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 20:10:59.153615  639301 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 20:10:59.198819  639301 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0108 20:10:59.198893  639301 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0108 20:10:59.198934  639301 kubeadm.go:322] OS: Linux
	I0108 20:10:59.198983  639301 kubeadm.go:322] CGROUPS_CPU: enabled
	I0108 20:10:59.199032  639301 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0108 20:10:59.199081  639301 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0108 20:10:59.199130  639301 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0108 20:10:59.199179  639301 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0108 20:10:59.199231  639301 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0108 20:10:59.199277  639301 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0108 20:10:59.199326  639301 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0108 20:10:59.199374  639301 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0108 20:10:59.274734  639301 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 20:10:59.274844  639301 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 20:10:59.274938  639301 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 20:10:59.520662  639301 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 20:10:59.524004  639301 out.go:204]   - Generating certificates and keys ...
	I0108 20:10:59.524109  639301 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 20:10:59.524185  639301 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 20:10:59.986528  639301 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 20:11:00.272886  639301 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 20:11:00.494926  639301 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 20:11:00.885518  639301 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 20:11:01.461854  639301 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 20:11:01.461992  639301 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-888287 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 20:11:01.979373  639301 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 20:11:01.979593  639301 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-888287 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 20:11:02.984240  639301 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 20:11:03.294055  639301 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 20:11:04.016229  639301 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 20:11:04.016408  639301 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 20:11:04.701075  639301 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 20:11:05.118467  639301 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 20:11:06.129611  639301 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 20:11:06.395970  639301 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 20:11:06.396707  639301 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 20:11:06.401061  639301 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 20:11:06.403481  639301 out.go:204]   - Booting up control plane ...
	I0108 20:11:06.403577  639301 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 20:11:06.403655  639301 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 20:11:06.404327  639301 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 20:11:06.414610  639301 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 20:11:06.415817  639301 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 20:11:06.416060  639301 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 20:11:06.519164  639301 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 20:11:13.521353  639301 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002304 seconds
	I0108 20:11:13.521468  639301 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 20:11:13.538707  639301 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 20:11:14.067336  639301 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 20:11:14.067522  639301 kubeadm.go:322] [mark-control-plane] Marking the node addons-888287 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 20:11:14.589457  639301 kubeadm.go:322] [bootstrap-token] Using token: 6uacg9.typkd5p35n8o6sry
	I0108 20:11:14.592153  639301 out.go:204]   - Configuring RBAC rules ...
	I0108 20:11:14.592274  639301 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 20:11:14.616090  639301 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 20:11:14.627582  639301 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 20:11:14.631102  639301 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 20:11:14.637366  639301 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 20:11:14.641148  639301 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 20:11:14.658163  639301 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 20:11:14.905354  639301 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 20:11:15.055205  639301 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 20:11:15.055233  639301 kubeadm.go:322] 
	I0108 20:11:15.055296  639301 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 20:11:15.055311  639301 kubeadm.go:322] 
	I0108 20:11:15.055385  639301 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 20:11:15.055393  639301 kubeadm.go:322] 
	I0108 20:11:15.055418  639301 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 20:11:15.055478  639301 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 20:11:15.055531  639301 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 20:11:15.055541  639301 kubeadm.go:322] 
	I0108 20:11:15.055592  639301 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 20:11:15.055599  639301 kubeadm.go:322] 
	I0108 20:11:15.055644  639301 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 20:11:15.055653  639301 kubeadm.go:322] 
	I0108 20:11:15.055702  639301 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 20:11:15.055776  639301 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 20:11:15.055844  639301 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 20:11:15.055853  639301 kubeadm.go:322] 
	I0108 20:11:15.055932  639301 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 20:11:15.056008  639301 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 20:11:15.056016  639301 kubeadm.go:322] 
	I0108 20:11:15.056095  639301 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 6uacg9.typkd5p35n8o6sry \
	I0108 20:11:15.056196  639301 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7781d8275fe6fc370b9207d46f90d60f186320d9f0d72d24606e41c221afb39a \
	I0108 20:11:15.056220  639301 kubeadm.go:322] 	--control-plane 
	I0108 20:11:15.056225  639301 kubeadm.go:322] 
	I0108 20:11:15.056307  639301 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 20:11:15.056313  639301 kubeadm.go:322] 
	I0108 20:11:15.056395  639301 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 6uacg9.typkd5p35n8o6sry \
	I0108 20:11:15.056494  639301 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7781d8275fe6fc370b9207d46f90d60f186320d9f0d72d24606e41c221afb39a 
	I0108 20:11:15.060270  639301 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0108 20:11:15.060398  639301 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 20:11:15.060419  639301 cni.go:84] Creating CNI manager for ""
	I0108 20:11:15.060428  639301 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 20:11:15.064344  639301 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 20:11:15.066816  639301 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 20:11:15.085636  639301 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 20:11:15.085672  639301 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 20:11:15.154172  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
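After the manifest above is applied, the kindnet rollout can be watched from the host; a sketch, assuming kindnet's usual app=kindnet label:

	# kindnet runs as a kube-system DaemonSet; one pod per node should reach Running.
	kubectl --context addons-888287 -n kube-system get pods -l app=kindnet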
	I0108 20:11:16.009657  639301 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 20:11:16.009784  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:16.009788  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28 minikube.k8s.io/name=addons-888287 minikube.k8s.io/updated_at=2024_01_08T20_11_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:16.191053  639301 ops.go:34] apiserver oom_adj: -16
	I0108 20:11:16.191159  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:16.691272  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:17.191995  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:17.692151  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:18.191666  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:18.692209  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:19.191440  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:19.692102  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:20.191338  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:20.691649  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:21.191936  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:21.691799  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:22.192078  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:22.691624  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:23.192237  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:23.691635  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:24.191223  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:24.691938  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:25.192014  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:25.692240  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:26.191969  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:26.691693  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:27.191318  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:27.691249  639301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:11:27.887155  639301 kubeadm.go:1088] duration metric: took 11.877449695s to wait for elevateKubeSystemPrivileges.
	I0108 20:11:27.887187  639301 kubeadm.go:406] StartCluster complete in 28.867309988s
	I0108 20:11:27.887205  639301 settings.go:142] acquiring lock: {Name:mk63cb8f057d0d432df7260ff815cc6f0354f468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:11:27.887315  639301 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17907-633350/kubeconfig
	I0108 20:11:27.887695  639301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/kubeconfig: {Name:mk2f931b682c68dbcf44ed887f090aab8cb1a7c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:11:27.889884  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 20:11:27.890128  639301 config.go:182] Loaded profile config "addons-888287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:11:27.890163  639301 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
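The same addon toggles shown in the toEnable map above are exposed through the minikube CLI; a sketch:

	# Inspect and flip addons for this profile.
	minikube -p addons-888287 addons list
	minikube -p addons-888287 addons enable ingress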
	I0108 20:11:27.890239  639301 addons.go:69] Setting yakd=true in profile "addons-888287"
	I0108 20:11:27.890256  639301 addons.go:237] Setting addon yakd=true in "addons-888287"
	I0108 20:11:27.890294  639301 host.go:66] Checking if "addons-888287" exists ...
	I0108 20:11:27.891018  639301 addons.go:69] Setting ingress-dns=true in profile "addons-888287"
	I0108 20:11:27.891039  639301 addons.go:237] Setting addon ingress-dns=true in "addons-888287"
	I0108 20:11:27.891077  639301 host.go:66] Checking if "addons-888287" exists ...
	I0108 20:11:27.891489  639301 cli_runner.go:164] Run: docker container inspect addons-888287 --format={{.State.Status}}
	I0108 20:11:27.891928  639301 cli_runner.go:164] Run: docker container inspect addons-888287 --format={{.State.Status}}
	I0108 20:11:27.892191  639301 addons.go:69] Setting cloud-spanner=true in profile "addons-888287"
	I0108 20:11:27.892215  639301 addons.go:237] Setting addon cloud-spanner=true in "addons-888287"
	I0108 20:11:27.892250  639301 host.go:66] Checking if "addons-888287" exists ...
	I0108 20:11:27.892622  639301 cli_runner.go:164] Run: docker container inspect addons-888287 --format={{.State.Status}}
	I0108 20:11:27.895353  639301 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-888287"
	I0108 20:11:27.895399  639301 addons.go:237] Setting addon csi-hostpath-driver=true in "addons-888287"
	I0108 20:11:27.895439  639301 host.go:66] Checking if "addons-888287" exists ...
	I0108 20:11:27.895831  639301 cli_runner.go:164] Run: docker container inspect addons-888287 --format={{.State.Status}}
	I0108 20:11:27.896426  639301 addons.go:69] Setting inspektor-gadget=true in profile "addons-888287"
	I0108 20:11:27.896449  639301 addons.go:237] Setting addon inspektor-gadget=true in "addons-888287"
	I0108 20:11:27.896486  639301 host.go:66] Checking if "addons-888287" exists ...
	I0108 20:11:27.896879  639301 cli_runner.go:164] Run: docker container inspect addons-888287 --format={{.State.Status}}
	I0108 20:11:27.900983  639301 addons.go:69] Setting metrics-server=true in profile "addons-888287"
	I0108 20:11:27.901005  639301 addons.go:237] Setting addon metrics-server=true in "addons-888287"
	I0108 20:11:27.901049  639301 host.go:66] Checking if "addons-888287" exists ...
	I0108 20:11:27.901440  639301 cli_runner.go:164] Run: docker container inspect addons-888287 --format={{.State.Status}}
	I0108 20:11:27.901667  639301 addons.go:69] Setting default-storageclass=true in profile "addons-888287"
	I0108 20:11:27.901683  639301 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-888287"
	I0108 20:11:27.905757  639301 addons.go:69] Setting gcp-auth=true in profile "addons-888287"
	I0108 20:11:27.905784  639301 mustload.go:65] Loading cluster: addons-888287
	I0108 20:11:27.906077  639301 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-888287"
	I0108 20:11:27.906091  639301 addons.go:237] Setting addon nvidia-device-plugin=true in "addons-888287"
	I0108 20:11:27.906127  639301 host.go:66] Checking if "addons-888287" exists ...
	I0108 20:11:27.906705  639301 cli_runner.go:164] Run: docker container inspect addons-888287 --format={{.State.Status}}
	I0108 20:11:27.906999  639301 addons.go:69] Setting ingress=true in profile "addons-888287"
	I0108 20:11:27.907017  639301 addons.go:237] Setting addon ingress=true in "addons-888287"
	I0108 20:11:27.907060  639301 host.go:66] Checking if "addons-888287" exists ...
	I0108 20:11:27.907439  639301 cli_runner.go:164] Run: docker container inspect addons-888287 --format={{.State.Status}}
	I0108 20:11:27.930694  639301 addons.go:69] Setting registry=true in profile "addons-888287"
	I0108 20:11:27.930768  639301 addons.go:237] Setting addon registry=true in "addons-888287"
	I0108 20:11:27.930847  639301 host.go:66] Checking if "addons-888287" exists ...
	I0108 20:11:27.931334  639301 cli_runner.go:164] Run: docker container inspect addons-888287 --format={{.State.Status}}
	I0108 20:11:27.957343  639301 addons.go:69] Setting storage-provisioner=true in profile "addons-888287"
	I0108 20:11:27.957417  639301 addons.go:237] Setting addon storage-provisioner=true in "addons-888287"
	I0108 20:11:27.957499  639301 host.go:66] Checking if "addons-888287" exists ...
	I0108 20:11:27.957983  639301 cli_runner.go:164] Run: docker container inspect addons-888287 --format={{.State.Status}}
	I0108 20:11:27.958170  639301 cli_runner.go:164] Run: docker container inspect addons-888287 --format={{.State.Status}}
	I0108 20:11:27.986910  639301 config.go:182] Loaded profile config "addons-888287": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:11:27.987210  639301 cli_runner.go:164] Run: docker container inspect addons-888287 --format={{.State.Status}}
	I0108 20:11:28.002875  639301 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-888287"
	I0108 20:11:28.002965  639301 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-888287"
	I0108 20:11:28.003435  639301 cli_runner.go:164] Run: docker container inspect addons-888287 --format={{.State.Status}}
	I0108 20:11:28.018531  639301 addons.go:69] Setting volumesnapshots=true in profile "addons-888287"
	I0108 20:11:28.018609  639301 addons.go:237] Setting addon volumesnapshots=true in "addons-888287"
	I0108 20:11:28.018686  639301 host.go:66] Checking if "addons-888287" exists ...
	I0108 20:11:28.019194  639301 cli_runner.go:164] Run: docker container inspect addons-888287 --format={{.State.Status}}
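The per-addon toggles above are what the minikube CLI flips when addons are enabled on a profile. A minimal sketch of the equivalent manual invocation (addon and profile names are taken from the log; the subcommands are stock minikube CLI):

    minikube -p addons-888287 addons enable ingress
    minikube -p addons-888287 addons enable csi-hostpath-driver
    minikube -p addons-888287 addons list   # confirm which addons are enabled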
	I0108 20:11:28.058058  639301 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0108 20:11:28.064419  639301 addons.go:429] installing /etc/kubernetes/addons/deployment.yaml
	I0108 20:11:28.064446  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0108 20:11:28.064514  639301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-888287
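The Go template in the cli_runner call above extracts the host port Docker mapped to the container's SSH port (22/tcp). Assuming a local Docker daemon, the stock `docker port` subcommand returns the same information:

    # prints the host address bound to 22/tcp, e.g. 0.0.0.0:33404
    docker port addons-888287 22/tcp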
	I0108 20:11:28.101779  639301 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0108 20:11:28.104627  639301 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0108 20:11:28.107444  639301 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0108 20:11:28.109757  639301 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0108 20:11:28.113214  639301 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0108 20:11:28.115839  639301 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0108 20:11:28.137293  639301 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0108 20:11:28.144764  639301 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0108 20:11:28.144828  639301 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0108 20:11:28.144833  639301 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0108 20:11:28.148193  639301 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 20:11:28.150157  639301 addons.go:429] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0108 20:11:28.152672  639301 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0108 20:11:28.152678  639301 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0108 20:11:28.152692  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0108 20:11:28.155255  639301 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 20:11:28.155269  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0108 20:11:28.157684  639301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-888287
	I0108 20:11:28.159997  639301 addons.go:429] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0108 20:11:28.160015  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0108 20:11:28.160075  639301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-888287
	I0108 20:11:28.177712  639301 addons.go:429] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0108 20:11:28.177736  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0108 20:11:28.177802  639301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-888287
	I0108 20:11:28.183727  639301 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 20:11:28.183752  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 20:11:28.183815  639301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-888287
	I0108 20:11:28.202198  639301 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0108 20:11:28.203277  639301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-888287
	I0108 20:11:28.204374  639301 addons.go:429] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0108 20:11:28.204391  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0108 20:11:28.204446  639301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-888287
	I0108 20:11:28.219459  639301 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:11:28.226961  639301 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 20:11:28.240243  639301 out.go:177]   - Using image docker.io/registry:2.8.3
	I0108 20:11:28.246626  639301 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:11:28.246647  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 20:11:28.246722  639301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-888287
	I0108 20:11:28.249143  639301 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0108 20:11:28.251272  639301 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0108 20:11:28.274982  639301 addons.go:429] installing /etc/kubernetes/addons/registry-rc.yaml
	I0108 20:11:28.275003  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0108 20:11:28.275067  639301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-888287
	I0108 20:11:28.298762  639301 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 20:11:28.298829  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0108 20:11:28.298926  639301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-888287
	I0108 20:11:28.310800  639301 host.go:66] Checking if "addons-888287" exists ...
	I0108 20:11:28.319208  639301 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0108 20:11:28.321519  639301 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0108 20:11:28.321541  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0108 20:11:28.321609  639301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-888287
	I0108 20:11:28.329585  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
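The bash pipeline above rewrites the CoreDNS Corefile in place: it inserts a `hosts` block ahead of the existing `forward . /etc/resolv.conf` directive so that host.minikube.internal resolves to the host gateway (192.168.49.1), and inserts a `log` directive before `errors` to turn on query logging. A sketch of the resulting fragment, reconstructed from the sed expressions (verify against the live object):

    # roughly what the pipeline leaves in the Corefile (illustrative):
    #     hosts {
    #        192.168.49.1 host.minikube.internal
    #        fallthrough
    #     }
    #     forward . /etc/resolv.conf
    kubectl -n kube-system get configmap coredns -o yaml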
	I0108 20:11:28.330595  639301 addons.go:237] Setting addon storage-provisioner-rancher=true in "addons-888287"
	I0108 20:11:28.330638  639301 host.go:66] Checking if "addons-888287" exists ...
	I0108 20:11:28.331084  639301 cli_runner.go:164] Run: docker container inspect addons-888287 --format={{.State.Status}}
	I0108 20:11:28.357787  639301 addons.go:237] Setting addon default-storageclass=true in "addons-888287"
	I0108 20:11:28.357827  639301 host.go:66] Checking if "addons-888287" exists ...
	I0108 20:11:28.358251  639301 cli_runner.go:164] Run: docker container inspect addons-888287 --format={{.State.Status}}
	I0108 20:11:28.374358  639301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/addons-888287/id_rsa Username:docker}
	I0108 20:11:28.445668  639301 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-888287" context rescaled to 1 replicas
	I0108 20:11:28.445705  639301 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 20:11:28.448215  639301 out.go:177] * Verifying Kubernetes components...
	I0108 20:11:28.452069  639301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:11:28.473977  639301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/addons-888287/id_rsa Username:docker}
	I0108 20:11:28.491954  639301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/addons-888287/id_rsa Username:docker}
	I0108 20:11:28.504812  639301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/addons-888287/id_rsa Username:docker}
	I0108 20:11:28.505017  639301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/addons-888287/id_rsa Username:docker}
	I0108 20:11:28.526944  639301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/addons-888287/id_rsa Username:docker}
	I0108 20:11:28.546204  639301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/addons-888287/id_rsa Username:docker}
	I0108 20:11:28.552605  639301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/addons-888287/id_rsa Username:docker}
	I0108 20:11:28.586502  639301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/addons-888287/id_rsa Username:docker}
	I0108 20:11:28.605156  639301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/addons-888287/id_rsa Username:docker}
	I0108 20:11:28.609267  639301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/addons-888287/id_rsa Username:docker}
	I0108 20:11:28.624946  639301 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 20:11:28.627320  639301 out.go:177]   - Using image docker.io/busybox:stable
	I0108 20:11:28.629544  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 20:11:28.629610  639301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-888287
	I0108 20:11:28.631816  639301 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0108 20:11:28.635940  639301 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0108 20:11:28.635963  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0108 20:11:28.636027  639301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-888287
	I0108 20:11:28.670746  639301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/addons-888287/id_rsa Username:docker}
	I0108 20:11:28.678601  639301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/addons-888287/id_rsa Username:docker}
	I0108 20:11:28.858288  639301 addons.go:429] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0108 20:11:28.858313  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0108 20:11:28.880520  639301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0108 20:11:28.924563  639301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0108 20:11:28.961236  639301 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 20:11:28.961255  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0108 20:11:29.061581  639301 addons.go:429] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0108 20:11:29.061613  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0108 20:11:29.068998  639301 addons.go:429] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0108 20:11:29.069021  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0108 20:11:29.076178  639301 addons.go:429] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0108 20:11:29.076203  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0108 20:11:29.084411  639301 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 20:11:29.084436  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 20:11:29.091740  639301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:11:29.128671  639301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 20:11:29.159997  639301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 20:11:29.178880  639301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 20:11:29.194173  639301 addons.go:429] installing /etc/kubernetes/addons/ig-role.yaml
	I0108 20:11:29.194192  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0108 20:11:29.203412  639301 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0108 20:11:29.203483  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0108 20:11:29.215715  639301 addons.go:429] installing /etc/kubernetes/addons/registry-svc.yaml
	I0108 20:11:29.215777  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0108 20:11:29.219193  639301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0108 20:11:29.248792  639301 addons.go:429] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0108 20:11:29.248862  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0108 20:11:29.255339  639301 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 20:11:29.255426  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 20:11:29.285652  639301 addons.go:429] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0108 20:11:29.285713  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0108 20:11:29.348640  639301 addons.go:429] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0108 20:11:29.348704  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0108 20:11:29.389248  639301 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0108 20:11:29.389331  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0108 20:11:29.392512  639301 addons.go:429] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0108 20:11:29.392534  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0108 20:11:29.414241  639301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 20:11:29.418945  639301 addons.go:429] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0108 20:11:29.418970  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0108 20:11:29.461713  639301 addons.go:429] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0108 20:11:29.461738  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0108 20:11:29.509787  639301 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0108 20:11:29.509812  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0108 20:11:29.549382  639301 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0108 20:11:29.549409  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0108 20:11:29.581722  639301 addons.go:429] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0108 20:11:29.581746  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0108 20:11:29.649229  639301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0108 20:11:29.658824  639301 addons.go:429] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0108 20:11:29.658885  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0108 20:11:29.723872  639301 addons.go:429] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0108 20:11:29.723898  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0108 20:11:29.727925  639301 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0108 20:11:29.727948  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0108 20:11:29.805732  639301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0108 20:11:29.860584  639301 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0108 20:11:29.860610  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0108 20:11:29.913102  639301 addons.go:429] installing /etc/kubernetes/addons/ig-crd.yaml
	I0108 20:11:29.913130  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0108 20:11:29.921740  639301 addons.go:429] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 20:11:29.921764  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0108 20:11:29.984954  639301 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0108 20:11:29.984980  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0108 20:11:30.044553  639301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 20:11:30.054264  639301 addons.go:429] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0108 20:11:30.054307  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0108 20:11:30.128375  639301 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0108 20:11:30.128401  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0108 20:11:30.281445  639301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0108 20:11:30.344448  639301 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0108 20:11:30.344481  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0108 20:11:30.481608  639301 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0108 20:11:30.481633  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0108 20:11:30.581529  639301 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.251910914s)
	I0108 20:11:30.581568  639301 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0108 20:11:30.581611  639301 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.129521419s)
	I0108 20:11:30.582510  639301 node_ready.go:35] waiting up to 6m0s for node "addons-888287" to be "Ready" ...
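The node_ready poll that starts here is the programmatic form of waiting on the node's Ready condition; from the CLI the same check would be (a sketch, assuming the kubeconfig points at this cluster):

    kubectl wait --for=condition=Ready node/addons-888287 --timeout=6m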
	I0108 20:11:30.636967  639301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0108 20:11:32.590204  639301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.70964647s)
	I0108 20:11:32.590273  639301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.665686989s)
	I0108 20:11:32.734042  639301 node_ready.go:58] node "addons-888287" has status "Ready":"False"
	I0108 20:11:33.608010  639301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.516233486s)
	I0108 20:11:34.379185  639301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.250476273s)
	I0108 20:11:34.379258  639301 addons.go:473] Verifying addon ingress=true in "addons-888287"
	I0108 20:11:34.379435  639301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.219413473s)
	I0108 20:11:34.379460  639301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.200552687s)
	I0108 20:11:34.379500  639301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.160252017s)
	I0108 20:11:34.379552  639301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.965281525s)
	I0108 20:11:34.379574  639301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.730289075s)
	I0108 20:11:34.379604  639301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.573846601s)
	I0108 20:11:34.379720  639301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.09823898s)
	I0108 20:11:34.379833  639301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.335249872s)
	I0108 20:11:34.381829  639301 out.go:177] * Verifying ingress addon...
	I0108 20:11:34.382232  639301 addons.go:473] Verifying addon metrics-server=true in "addons-888287"
	I0108 20:11:34.382246  639301 addons.go:473] Verifying addon registry=true in "addons-888287"
	W0108 20:11:34.382400  639301 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
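The failure above is an ordering race rather than a broken manifest: the VolumeSnapshotClass object is applied in the same batch as the CRD that defines its kind, and the API server has not finished registering the new type when the CR arrives (hence "ensure CRDs are installed first"). minikube simply retries below; a manual workaround splits the apply and waits for the CRD to become Established (standard kubectl; file names taken from the log):

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml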
	I0108 20:11:34.386338  639301 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0108 20:11:34.388717  639301 out.go:177] * Verifying registry addon...
	I0108 20:11:34.388817  639301 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-888287 service yakd-dashboard -n yakd-dashboard
	
	
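The service command printed above only works once the dashboard pod is Ready; a quick check with stock kubectl (namespace taken from the log):

    kubectl -n yakd-dashboard get pods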
	I0108 20:11:34.388834  639301 retry.go:31] will retry after 252.877749ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0108 20:11:34.392493  639301 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0108 20:11:34.397971  639301 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0108 20:11:34.406779  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0108 20:11:34.434161  639301 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
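This warning is the API server's optimistic-concurrency check: the storage class changed between minikube's read and its update, so the write was rejected as stale. Re-reading and re-applying resolves it; the same default-class toggle can be done by hand via the standard annotation (the class name local-path comes from the error message; using `standard` as the replacement default is an assumption for illustration):

    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
    kubectl patch storageclass standard -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'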
	I0108 20:11:34.437544  639301 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0108 20:11:34.437569  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:34.624957  639301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.987934401s)
	I0108 20:11:34.625040  639301 addons.go:473] Verifying addon csi-hostpath-driver=true in "addons-888287"
	I0108 20:11:34.627563  639301 out.go:177] * Verifying csi-hostpath-driver addon...
	I0108 20:11:34.630707  639301 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0108 20:11:34.645094  639301 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0108 20:11:34.645119  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
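Each of these kapi waits polls pods matching a label selector until they report Ready; the CLI equivalent for the selector above would be (a sketch):

    kubectl -n kube-system wait --for=condition=Ready pod \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=6m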
	I0108 20:11:34.660392  639301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 20:11:34.906500  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:34.920795  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:35.094104  639301 node_ready.go:58] node "addons-888287" has status "Ready":"False"
	I0108 20:11:35.166741  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:35.209268  639301 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0108 20:11:35.209386  639301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-888287
	I0108 20:11:35.235893  639301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/addons-888287/id_rsa Username:docker}
	I0108 20:11:35.366810  639301 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0108 20:11:35.393439  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:35.394453  639301 addons.go:237] Setting addon gcp-auth=true in "addons-888287"
	I0108 20:11:35.394543  639301 host.go:66] Checking if "addons-888287" exists ...
	I0108 20:11:35.395095  639301 cli_runner.go:164] Run: docker container inspect addons-888287 --format={{.State.Status}}
	I0108 20:11:35.418966  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:35.425804  639301 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0108 20:11:35.425856  639301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-888287
	I0108 20:11:35.460190  639301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33404 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/addons-888287/id_rsa Username:docker}
	I0108 20:11:35.637173  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:35.908907  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:35.925551  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:36.151778  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:36.285505  639301 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 20:11:36.285655  639301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.625222658s)
	I0108 20:11:36.288337  639301 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0108 20:11:36.291789  639301 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0108 20:11:36.291821  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0108 20:11:36.365736  639301 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0108 20:11:36.365765  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0108 20:11:36.393660  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:36.412895  639301 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0108 20:11:36.412962  639301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0108 20:11:36.413503  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:36.454947  639301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0108 20:11:36.668420  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:36.895451  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:36.911706  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:37.135127  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:37.406352  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:37.418935  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:37.632227  639301 node_ready.go:58] node "addons-888287" has status "Ready":"False"
	I0108 20:11:37.674504  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:37.855524  639301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.400492026s)
	I0108 20:11:37.858506  639301 addons.go:473] Verifying addon gcp-auth=true in "addons-888287"
	I0108 20:11:37.863547  639301 out.go:177] * Verifying gcp-auth addon...
	I0108 20:11:37.867248  639301 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0108 20:11:37.894637  639301 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0108 20:11:37.894711  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
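gcp-auth is delivered as a mutating admission webhook (the gcp-auth-webhook.yaml applied above) plus a pod in the gcp-auth namespace that injects Google credentials into workloads; to inspect what was installed (stock kubectl):

    kubectl -n gcp-auth get pods
    kubectl get mutatingwebhookconfigurations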
	I0108 20:11:37.908567  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:37.926337  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:38.138688  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:38.375442  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:38.393720  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:38.414852  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:38.636807  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:38.871524  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:38.895883  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:38.912644  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:39.136450  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:39.371701  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:39.393267  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:39.411722  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:39.636441  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:39.872605  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:39.894521  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:39.911105  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:40.087144  639301 node_ready.go:58] node "addons-888287" has status "Ready":"False"
	I0108 20:11:40.136244  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:40.372644  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:40.396315  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:40.411908  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:40.637756  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:40.872047  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:40.894764  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:40.913458  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:41.135043  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:41.371357  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:41.393376  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:41.412592  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:41.635818  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:41.871339  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:41.893716  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:41.910988  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:42.136461  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:42.371847  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:42.393201  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:42.411194  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:42.586258  639301 node_ready.go:58] node "addons-888287" has status "Ready":"False"
	I0108 20:11:42.636974  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:42.871086  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:42.893240  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:42.911206  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:43.134891  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:43.371422  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:43.393441  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:43.411372  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:43.635872  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:43.871453  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:43.892470  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:43.910273  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:44.135788  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:44.371347  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:44.393285  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:44.411188  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:44.586566  639301 node_ready.go:58] node "addons-888287" has status "Ready":"False"
	I0108 20:11:44.635803  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:44.871563  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:44.893273  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:44.911351  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:45.136250  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:45.372235  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:45.393297  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:45.414259  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:45.635765  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:45.870489  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:45.893417  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:45.911837  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:46.135471  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:46.371243  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:46.392992  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:46.411070  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:46.636118  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:46.871104  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:46.893216  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:46.911362  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:47.136083  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[the same four "waiting for pod ... current state: Pending" lines repeat every ~250ms from 20:11:47 through 20:11:58; node_ready.go:58 reports node "addons-888287" has status "Ready":"False" at 20:11:47, :49, :51, :54, and :56]
	I0108 20:11:58.590333  639301 node_ready.go:49] node "addons-888287" has status "Ready":"True"
	I0108 20:11:58.590362  639301 node_ready.go:38] duration metric: took 28.007823904s waiting for node "addons-888287" to be "Ready" ...
	I0108 20:11:58.590374  639301 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
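The node-readiness gate above can be reproduced by hand; a minimal sketch using kubectl directly (context and node name taken from this run, and the 6m0s timeout mirrors the log's waiting budget):

	# Wait for the node to report the Ready condition that node_ready.go polls for
	kubectl --context addons-888287 wait --for=condition=Ready node/addons-888287 --timeout=6m0s
	# Then list one of the system-critical pod groups the log enumerates, e.g. kube-dns
	kubectl --context addons-888287 -n kube-system get pods -l k8s-app=kube-dns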
	I0108 20:11:58.602131  639301 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ppg77" in "kube-system" namespace to be "Ready" ...
	I0108 20:11:58.665323  639301 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0108 20:11:58.665350  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:58.880416  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:58.898705  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:58.914453  639301 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0108 20:11:58.914472  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[polling of the four addon selectors continues every ~250ms from 20:11:59 through 20:12:00.4, all still Pending]
	I0108 20:12:00.619028  639301 pod_ready.go:92] pod "coredns-5dd5756b68-ppg77" in "kube-system" namespace has status "Ready":"True"
	I0108 20:12:00.619097  639301 pod_ready.go:81] duration metric: took 2.01692823s waiting for pod "coredns-5dd5756b68-ppg77" in "kube-system" namespace to be "Ready" ...
	I0108 20:12:00.619133  639301 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-888287" in "kube-system" namespace to be "Ready" ...
	I0108 20:12:00.655841  639301 pod_ready.go:92] pod "etcd-addons-888287" in "kube-system" namespace has status "Ready":"True"
	I0108 20:12:00.656006  639301 pod_ready.go:81] duration metric: took 36.852126ms waiting for pod "etcd-addons-888287" in "kube-system" namespace to be "Ready" ...
	I0108 20:12:00.656040  639301 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-888287" in "kube-system" namespace to be "Ready" ...
	I0108 20:12:00.656886  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:00.667665  639301 pod_ready.go:92] pod "kube-apiserver-addons-888287" in "kube-system" namespace has status "Ready":"True"
	I0108 20:12:00.667688  639301 pod_ready.go:81] duration metric: took 11.612976ms waiting for pod "kube-apiserver-addons-888287" in "kube-system" namespace to be "Ready" ...
	I0108 20:12:00.667698  639301 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-888287" in "kube-system" namespace to be "Ready" ...
	I0108 20:12:00.673130  639301 pod_ready.go:92] pod "kube-controller-manager-addons-888287" in "kube-system" namespace has status "Ready":"True"
	I0108 20:12:00.673195  639301 pod_ready.go:81] duration metric: took 5.487803ms waiting for pod "kube-controller-manager-addons-888287" in "kube-system" namespace to be "Ready" ...
	I0108 20:12:00.673223  639301 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rgq7f" in "kube-system" namespace to be "Ready" ...
	I0108 20:12:00.679345  639301 pod_ready.go:92] pod "kube-proxy-rgq7f" in "kube-system" namespace has status "Ready":"True"
	I0108 20:12:00.679412  639301 pod_ready.go:81] duration metric: took 6.165731ms waiting for pod "kube-proxy-rgq7f" in "kube-system" namespace to be "Ready" ...
	I0108 20:12:00.679437  639301 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-888287" in "kube-system" namespace to be "Ready" ...
	I0108 20:12:00.871847  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:00.893823  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:00.913380  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:01.006727  639301 pod_ready.go:92] pod "kube-scheduler-addons-888287" in "kube-system" namespace has status "Ready":"True"
	I0108 20:12:01.006797  639301 pod_ready.go:81] duration metric: took 327.339115ms waiting for pod "kube-scheduler-addons-888287" in "kube-system" namespace to be "Ready" ...
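Each control-plane wait above is the per-label readiness check from pod_ready.go resolving quickly because the pods were already Running; an equivalent manual loop (labels copied from the log, kube-system namespace as shown there) might look like:

	# Block until each system-critical component pod reports Ready
	for l in component=etcd component=kube-apiserver component=kube-controller-manager component=kube-scheduler k8s-app=kube-proxy; do
	  kubectl --context addons-888287 -n kube-system wait --for=condition=Ready pod -l "$l" --timeout=6m0s
	done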
	I0108 20:12:01.006824  639301 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-q2j5d" in "kube-system" namespace to be "Ready" ...
	I0108 20:12:01.138028  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[the "waiting for pod ... current state: Pending" lines for gcp-auth, ingress-nginx, registry, and csi-hostpath-driver repeat every ~250ms from 20:12:01 through 20:12:38; pod_ready.go:102 reports pod "metrics-server-7c66d45ddc-q2j5d" in "kube-system" namespace has status "Ready":"False" at every ~2s check from 20:12:03 through 20:12:37]
	I0108 20:12:38.517840  639301 pod_ready.go:92] pod "metrics-server-7c66d45ddc-q2j5d" in "kube-system" namespace has status "Ready":"True"
	I0108 20:12:38.517866  639301 pod_ready.go:81] duration metric: took 37.511020442s waiting for pod "metrics-server-7c66d45ddc-q2j5d" in "kube-system" namespace to be "Ready" ...
	I0108 20:12:38.517885  639301 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-59965" in "kube-system" namespace to be "Ready" ...
	I0108 20:12:38.527016  639301 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-59965" in "kube-system" namespace has status "Ready":"True"
	I0108 20:12:38.527050  639301 pod_ready.go:81] duration metric: took 9.155552ms waiting for pod "nvidia-device-plugin-daemonset-59965" in "kube-system" namespace to be "Ready" ...
	I0108 20:12:38.527072  639301 pod_ready.go:38] duration metric: took 39.936685428s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:12:38.527090  639301 api_server.go:52] waiting for apiserver process to appear ...
	I0108 20:12:38.527130  639301 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 20:12:38.527201  639301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 20:12:38.601900  639301 cri.go:89] found id: "1a45f6f09297c8aaa703087c5626bbd360b1270847522ee01a5574ec98a7f2f0"
	I0108 20:12:38.601934  639301 cri.go:89] found id: ""
	I0108 20:12:38.601942  639301 logs.go:284] 1 containers: [1a45f6f09297c8aaa703087c5626bbd360b1270847522ee01a5574ec98a7f2f0]
	I0108 20:12:38.602006  639301 ssh_runner.go:195] Run: which crictl
	I0108 20:12:38.607610  639301 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 20:12:38.607692  639301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 20:12:38.642999  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:38.684800  639301 cri.go:89] found id: "189bd604b67d0f20da68ce144e77acac12921063e2ed5756fd1308822929654f"
	I0108 20:12:38.684823  639301 cri.go:89] found id: ""
	I0108 20:12:38.684831  639301 logs.go:284] 1 containers: [189bd604b67d0f20da68ce144e77acac12921063e2ed5756fd1308822929654f]
	I0108 20:12:38.684892  639301 ssh_runner.go:195] Run: which crictl
	I0108 20:12:38.691137  639301 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 20:12:38.691219  639301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 20:12:38.750247  639301 cri.go:89] found id: "589820867e149df675811d68387d2e4261a5f54531b2ff66755be48669eaebac"
	I0108 20:12:38.750282  639301 cri.go:89] found id: ""
	I0108 20:12:38.750291  639301 logs.go:284] 1 containers: [589820867e149df675811d68387d2e4261a5f54531b2ff66755be48669eaebac]
	I0108 20:12:38.750350  639301 ssh_runner.go:195] Run: which crictl
	I0108 20:12:38.756612  639301 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 20:12:38.756689  639301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 20:12:38.814057  639301 cri.go:89] found id: "04edb30ddd2e9ba9eff0c8a43dd874a71840d657bd07b1d549b1fd37d4330d1d"
	I0108 20:12:38.814125  639301 cri.go:89] found id: ""
	I0108 20:12:38.814146  639301 logs.go:284] 1 containers: [04edb30ddd2e9ba9eff0c8a43dd874a71840d657bd07b1d549b1fd37d4330d1d]
	I0108 20:12:38.814232  639301 ssh_runner.go:195] Run: which crictl
	I0108 20:12:38.820145  639301 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 20:12:38.820284  639301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 20:12:38.872292  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:38.875432  639301 cri.go:89] found id: "f05e2a20254bf733af8249592a49862fcc308c6d35b850bf990340aa1ad173b3"
	I0108 20:12:38.875485  639301 cri.go:89] found id: ""
	I0108 20:12:38.875505  639301 logs.go:284] 1 containers: [f05e2a20254bf733af8249592a49862fcc308c6d35b850bf990340aa1ad173b3]
	I0108 20:12:38.875595  639301 ssh_runner.go:195] Run: which crictl
	I0108 20:12:38.880547  639301 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 20:12:38.880659  639301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 20:12:38.892902  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:38.912148  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:38.966651  639301 cri.go:89] found id: "1a7a4382188bef58b2ec6e0a3f55060523c1ed485459828a3f30aaf2ab37f292"
	I0108 20:12:38.966683  639301 cri.go:89] found id: ""
	I0108 20:12:38.966694  639301 logs.go:284] 1 containers: [1a7a4382188bef58b2ec6e0a3f55060523c1ed485459828a3f30aaf2ab37f292]
	I0108 20:12:38.966764  639301 ssh_runner.go:195] Run: which crictl
	I0108 20:12:38.979035  639301 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 20:12:38.979152  639301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 20:12:39.040388  639301 cri.go:89] found id: "3ebfecb7a1f058acb65fd0168734dc572f3800773595967c93228f364e060226"
	I0108 20:12:39.040412  639301 cri.go:89] found id: ""
	I0108 20:12:39.040421  639301 logs.go:284] 1 containers: [3ebfecb7a1f058acb65fd0168734dc572f3800773595967c93228f364e060226]
	I0108 20:12:39.040486  639301 ssh_runner.go:195] Run: which crictl
	I0108 20:12:39.047418  639301 logs.go:123] Gathering logs for kube-scheduler [04edb30ddd2e9ba9eff0c8a43dd874a71840d657bd07b1d549b1fd37d4330d1d] ...
	I0108 20:12:39.047456  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04edb30ddd2e9ba9eff0c8a43dd874a71840d657bd07b1d549b1fd37d4330d1d"
	I0108 20:12:39.138220  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:39.169557  639301 logs.go:123] Gathering logs for kube-controller-manager [1a7a4382188bef58b2ec6e0a3f55060523c1ed485459828a3f30aaf2ab37f292] ...
	I0108 20:12:39.169588  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7a4382188bef58b2ec6e0a3f55060523c1ed485459828a3f30aaf2ab37f292"
	I0108 20:12:39.321811  639301 logs.go:123] Gathering logs for container status ...
	I0108 20:12:39.321889  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 20:12:39.373656  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:39.393862  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:39.421401  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:39.430166  639301 logs.go:123] Gathering logs for kubelet ...
	I0108 20:12:39.430196  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 20:12:39.562038  639301 logs.go:123] Gathering logs for describe nodes ...
	I0108 20:12:39.562111  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 20:12:39.637113  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:39.776936  639301 logs.go:123] Gathering logs for etcd [189bd604b67d0f20da68ce144e77acac12921063e2ed5756fd1308822929654f] ...
	I0108 20:12:39.777015  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 189bd604b67d0f20da68ce144e77acac12921063e2ed5756fd1308822929654f"
	I0108 20:12:39.855884  639301 logs.go:123] Gathering logs for coredns [589820867e149df675811d68387d2e4261a5f54531b2ff66755be48669eaebac] ...
	I0108 20:12:39.855954  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 589820867e149df675811d68387d2e4261a5f54531b2ff66755be48669eaebac"
	I0108 20:12:39.871624  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:39.894319  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:39.913111  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:39.914792  639301 logs.go:123] Gathering logs for kube-proxy [f05e2a20254bf733af8249592a49862fcc308c6d35b850bf990340aa1ad173b3] ...
	I0108 20:12:39.914816  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f05e2a20254bf733af8249592a49862fcc308c6d35b850bf990340aa1ad173b3"
	I0108 20:12:39.990263  639301 logs.go:123] Gathering logs for kindnet [3ebfecb7a1f058acb65fd0168734dc572f3800773595967c93228f364e060226] ...
	I0108 20:12:39.990333  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ebfecb7a1f058acb65fd0168734dc572f3800773595967c93228f364e060226"
	I0108 20:12:40.089625  639301 logs.go:123] Gathering logs for CRI-O ...
	I0108 20:12:40.089700  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 20:12:40.136978  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:40.201383  639301 logs.go:123] Gathering logs for dmesg ...
	I0108 20:12:40.201496  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 20:12:40.255484  639301 logs.go:123] Gathering logs for kube-apiserver [1a45f6f09297c8aaa703087c5626bbd360b1270847522ee01a5574ec98a7f2f0] ...
	I0108 20:12:40.255666  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a45f6f09297c8aaa703087c5626bbd360b1270847522ee01a5574ec98a7f2f0"
	I0108 20:12:40.379751  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:40.393532  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:40.410800  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:40.639117  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:40.871313  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:40.894251  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:40.912357  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:41.140147  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:41.371158  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:41.393763  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:41.412955  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:41.637684  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:41.881788  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:41.893800  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:41.912845  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:12:42.141411  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:42.371556  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:42.393140  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:42.412456  639301 kapi.go:107] duration metric: took 1m8.019958534s to wait for kubernetes.io/minikube-addons=registry ...
	I0108 20:12:42.636585  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:42.870914  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:42.893359  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:42.895611  639301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:12:42.910667  639301 api_server.go:72] duration metric: took 1m14.464924548s to wait for apiserver process to appear ...
	I0108 20:12:42.910734  639301 api_server.go:88] waiting for apiserver healthz status ...
	I0108 20:12:42.910782  639301 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 20:12:42.910867  639301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 20:12:42.970115  639301 cri.go:89] found id: "1a45f6f09297c8aaa703087c5626bbd360b1270847522ee01a5574ec98a7f2f0"
	I0108 20:12:42.970149  639301 cri.go:89] found id: ""
	I0108 20:12:42.970158  639301 logs.go:284] 1 containers: [1a45f6f09297c8aaa703087c5626bbd360b1270847522ee01a5574ec98a7f2f0]
	I0108 20:12:42.970223  639301 ssh_runner.go:195] Run: which crictl
	I0108 20:12:42.974959  639301 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 20:12:42.975027  639301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 20:12:43.020874  639301 cri.go:89] found id: "189bd604b67d0f20da68ce144e77acac12921063e2ed5756fd1308822929654f"
	I0108 20:12:43.020897  639301 cri.go:89] found id: ""
	I0108 20:12:43.020905  639301 logs.go:284] 1 containers: [189bd604b67d0f20da68ce144e77acac12921063e2ed5756fd1308822929654f]
	I0108 20:12:43.020960  639301 ssh_runner.go:195] Run: which crictl
	I0108 20:12:43.025597  639301 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 20:12:43.025677  639301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 20:12:43.084264  639301 cri.go:89] found id: "589820867e149df675811d68387d2e4261a5f54531b2ff66755be48669eaebac"
	I0108 20:12:43.084288  639301 cri.go:89] found id: ""
	I0108 20:12:43.084296  639301 logs.go:284] 1 containers: [589820867e149df675811d68387d2e4261a5f54531b2ff66755be48669eaebac]
	I0108 20:12:43.084375  639301 ssh_runner.go:195] Run: which crictl
	I0108 20:12:43.095288  639301 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 20:12:43.095354  639301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 20:12:43.166282  639301 cri.go:89] found id: "04edb30ddd2e9ba9eff0c8a43dd874a71840d657bd07b1d549b1fd37d4330d1d"
	I0108 20:12:43.166302  639301 cri.go:89] found id: ""
	I0108 20:12:43.166310  639301 logs.go:284] 1 containers: [04edb30ddd2e9ba9eff0c8a43dd874a71840d657bd07b1d549b1fd37d4330d1d]
	I0108 20:12:43.166366  639301 ssh_runner.go:195] Run: which crictl
	I0108 20:12:43.180504  639301 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 20:12:43.180588  639301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 20:12:43.199960  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:43.342789  639301 cri.go:89] found id: "f05e2a20254bf733af8249592a49862fcc308c6d35b850bf990340aa1ad173b3"
	I0108 20:12:43.342852  639301 cri.go:89] found id: ""
	I0108 20:12:43.342873  639301 logs.go:284] 1 containers: [f05e2a20254bf733af8249592a49862fcc308c6d35b850bf990340aa1ad173b3]
	I0108 20:12:43.342960  639301 ssh_runner.go:195] Run: which crictl
	I0108 20:12:43.366498  639301 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 20:12:43.366587  639301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 20:12:43.409680  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:43.411772  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:43.506328  639301 cri.go:89] found id: "1a7a4382188bef58b2ec6e0a3f55060523c1ed485459828a3f30aaf2ab37f292"
	I0108 20:12:43.506353  639301 cri.go:89] found id: ""
	I0108 20:12:43.506368  639301 logs.go:284] 1 containers: [1a7a4382188bef58b2ec6e0a3f55060523c1ed485459828a3f30aaf2ab37f292]
	I0108 20:12:43.506423  639301 ssh_runner.go:195] Run: which crictl
	I0108 20:12:43.538646  639301 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 20:12:43.538736  639301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 20:12:43.637883  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:43.648746  639301 cri.go:89] found id: "3ebfecb7a1f058acb65fd0168734dc572f3800773595967c93228f364e060226"
	I0108 20:12:43.648776  639301 cri.go:89] found id: ""
	I0108 20:12:43.648784  639301 logs.go:284] 1 containers: [3ebfecb7a1f058acb65fd0168734dc572f3800773595967c93228f364e060226]
	I0108 20:12:43.648838  639301 ssh_runner.go:195] Run: which crictl
	I0108 20:12:43.653836  639301 logs.go:123] Gathering logs for kube-scheduler [04edb30ddd2e9ba9eff0c8a43dd874a71840d657bd07b1d549b1fd37d4330d1d] ...
	I0108 20:12:43.653867  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04edb30ddd2e9ba9eff0c8a43dd874a71840d657bd07b1d549b1fd37d4330d1d"
	I0108 20:12:43.738390  639301 logs.go:123] Gathering logs for kube-proxy [f05e2a20254bf733af8249592a49862fcc308c6d35b850bf990340aa1ad173b3] ...
	I0108 20:12:43.738485  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f05e2a20254bf733af8249592a49862fcc308c6d35b850bf990340aa1ad173b3"
	I0108 20:12:43.817445  639301 logs.go:123] Gathering logs for kube-controller-manager [1a7a4382188bef58b2ec6e0a3f55060523c1ed485459828a3f30aaf2ab37f292] ...
	I0108 20:12:43.817470  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7a4382188bef58b2ec6e0a3f55060523c1ed485459828a3f30aaf2ab37f292"
	I0108 20:12:43.871448  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:43.895639  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:43.934810  639301 logs.go:123] Gathering logs for kindnet [3ebfecb7a1f058acb65fd0168734dc572f3800773595967c93228f364e060226] ...
	I0108 20:12:43.934887  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ebfecb7a1f058acb65fd0168734dc572f3800773595967c93228f364e060226"
	I0108 20:12:44.011634  639301 logs.go:123] Gathering logs for describe nodes ...
	I0108 20:12:44.011710  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 20:12:44.144444  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:44.283221  639301 logs.go:123] Gathering logs for kube-apiserver [1a45f6f09297c8aaa703087c5626bbd360b1270847522ee01a5574ec98a7f2f0] ...
	I0108 20:12:44.283295  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a45f6f09297c8aaa703087c5626bbd360b1270847522ee01a5574ec98a7f2f0"
	I0108 20:12:44.372158  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:44.378660  639301 logs.go:123] Gathering logs for etcd [189bd604b67d0f20da68ce144e77acac12921063e2ed5756fd1308822929654f] ...
	I0108 20:12:44.378722  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 189bd604b67d0f20da68ce144e77acac12921063e2ed5756fd1308822929654f"
	I0108 20:12:44.393381  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:44.437631  639301 logs.go:123] Gathering logs for coredns [589820867e149df675811d68387d2e4261a5f54531b2ff66755be48669eaebac] ...
	I0108 20:12:44.437660  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 589820867e149df675811d68387d2e4261a5f54531b2ff66755be48669eaebac"
	I0108 20:12:44.485255  639301 logs.go:123] Gathering logs for CRI-O ...
	I0108 20:12:44.485286  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 20:12:44.578755  639301 logs.go:123] Gathering logs for container status ...
	I0108 20:12:44.578790  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 20:12:44.635845  639301 logs.go:123] Gathering logs for kubelet ...
	I0108 20:12:44.635878  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 20:12:44.641840  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:44.722809  639301 logs.go:123] Gathering logs for dmesg ...
	I0108 20:12:44.722844  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 20:12:44.871116  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:44.893272  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:45.139840  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:45.371677  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:45.393869  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:45.637042  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:45.871851  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:45.895509  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:46.137807  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:46.372163  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:46.394851  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:46.642584  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:46.871654  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:46.893498  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:47.137004  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:47.245965  639301 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0108 20:12:47.255724  639301 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0108 20:12:47.257019  639301 api_server.go:141] control plane version: v1.28.4
	I0108 20:12:47.257043  639301 api_server.go:131] duration metric: took 4.346288789s to wait for apiserver health ...
	I0108 20:12:47.257063  639301 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 20:12:47.257083  639301 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 20:12:47.257147  639301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 20:12:47.302910  639301 cri.go:89] found id: "1a45f6f09297c8aaa703087c5626bbd360b1270847522ee01a5574ec98a7f2f0"
	I0108 20:12:47.302932  639301 cri.go:89] found id: ""
	I0108 20:12:47.302940  639301 logs.go:284] 1 containers: [1a45f6f09297c8aaa703087c5626bbd360b1270847522ee01a5574ec98a7f2f0]
	I0108 20:12:47.302995  639301 ssh_runner.go:195] Run: which crictl
	I0108 20:12:47.307681  639301 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 20:12:47.307771  639301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 20:12:47.351549  639301 cri.go:89] found id: "189bd604b67d0f20da68ce144e77acac12921063e2ed5756fd1308822929654f"
	I0108 20:12:47.351569  639301 cri.go:89] found id: ""
	I0108 20:12:47.351577  639301 logs.go:284] 1 containers: [189bd604b67d0f20da68ce144e77acac12921063e2ed5756fd1308822929654f]
	I0108 20:12:47.351634  639301 ssh_runner.go:195] Run: which crictl
	I0108 20:12:47.356088  639301 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 20:12:47.356160  639301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 20:12:47.371983  639301 kapi.go:107] duration metric: took 1m9.504732566s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0108 20:12:47.375067  639301 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-888287 cluster.
	I0108 20:12:47.377520  639301 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0108 20:12:47.380198  639301 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
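	
	The three advisory messages above summarize the gcp-auth addon: its webhook mounts GCP credentials into every new pod unless the pod opts out. A minimal sketch of opting out, assuming only that the `gcp-auth-skip-secret` label key is honored as the message states (the pod name and image below are illustrative, not from this run):
	
	# Hypothetical pod that the gcp-auth webhook should skip (the label key is what matters):
	kubectl --context addons-888287 run skip-gcp-demo --image=nginx --labels=gcp-auth-skip-secret=true
	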
	I0108 20:12:47.393787  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:47.404842  639301 cri.go:89] found id: "589820867e149df675811d68387d2e4261a5f54531b2ff66755be48669eaebac"
	I0108 20:12:47.404871  639301 cri.go:89] found id: ""
	I0108 20:12:47.404880  639301 logs.go:284] 1 containers: [589820867e149df675811d68387d2e4261a5f54531b2ff66755be48669eaebac]
	I0108 20:12:47.404934  639301 ssh_runner.go:195] Run: which crictl
	I0108 20:12:47.409630  639301 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 20:12:47.409706  639301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 20:12:47.461241  639301 cri.go:89] found id: "04edb30ddd2e9ba9eff0c8a43dd874a71840d657bd07b1d549b1fd37d4330d1d"
	I0108 20:12:47.461261  639301 cri.go:89] found id: ""
	I0108 20:12:47.461270  639301 logs.go:284] 1 containers: [04edb30ddd2e9ba9eff0c8a43dd874a71840d657bd07b1d549b1fd37d4330d1d]
	I0108 20:12:47.461323  639301 ssh_runner.go:195] Run: which crictl
	I0108 20:12:47.466608  639301 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 20:12:47.466675  639301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 20:12:47.523885  639301 cri.go:89] found id: "f05e2a20254bf733af8249592a49862fcc308c6d35b850bf990340aa1ad173b3"
	I0108 20:12:47.523908  639301 cri.go:89] found id: ""
	I0108 20:12:47.523925  639301 logs.go:284] 1 containers: [f05e2a20254bf733af8249592a49862fcc308c6d35b850bf990340aa1ad173b3]
	I0108 20:12:47.523979  639301 ssh_runner.go:195] Run: which crictl
	I0108 20:12:47.529203  639301 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 20:12:47.529273  639301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 20:12:47.666575  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:47.862668  639301 cri.go:89] found id: "1a7a4382188bef58b2ec6e0a3f55060523c1ed485459828a3f30aaf2ab37f292"
	I0108 20:12:47.862755  639301 cri.go:89] found id: ""
	I0108 20:12:47.862784  639301 logs.go:284] 1 containers: [1a7a4382188bef58b2ec6e0a3f55060523c1ed485459828a3f30aaf2ab37f292]
	I0108 20:12:47.862935  639301 ssh_runner.go:195] Run: which crictl
	I0108 20:12:47.878224  639301 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 20:12:47.878343  639301 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 20:12:47.893675  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:48.144706  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:48.218479  639301 cri.go:89] found id: "3ebfecb7a1f058acb65fd0168734dc572f3800773595967c93228f364e060226"
	I0108 20:12:48.218504  639301 cri.go:89] found id: ""
	I0108 20:12:48.218512  639301 logs.go:284] 1 containers: [3ebfecb7a1f058acb65fd0168734dc572f3800773595967c93228f364e060226]
	I0108 20:12:48.218578  639301 ssh_runner.go:195] Run: which crictl
	I0108 20:12:48.233998  639301 logs.go:123] Gathering logs for kubelet ...
	I0108 20:12:48.234021  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 20:12:48.332934  639301 logs.go:123] Gathering logs for dmesg ...
	I0108 20:12:48.333012  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 20:12:48.396917  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:48.410144  639301 logs.go:123] Gathering logs for describe nodes ...
	I0108 20:12:48.410285  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 20:12:48.673607  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:48.738778  639301 logs.go:123] Gathering logs for kube-scheduler [04edb30ddd2e9ba9eff0c8a43dd874a71840d657bd07b1d549b1fd37d4330d1d] ...
	I0108 20:12:48.738911  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04edb30ddd2e9ba9eff0c8a43dd874a71840d657bd07b1d549b1fd37d4330d1d"
	I0108 20:12:48.893694  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:49.085768  639301 logs.go:123] Gathering logs for kube-proxy [f05e2a20254bf733af8249592a49862fcc308c6d35b850bf990340aa1ad173b3] ...
	I0108 20:12:49.085846  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f05e2a20254bf733af8249592a49862fcc308c6d35b850bf990340aa1ad173b3"
	I0108 20:12:49.136711  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:49.260533  639301 logs.go:123] Gathering logs for kube-controller-manager [1a7a4382188bef58b2ec6e0a3f55060523c1ed485459828a3f30aaf2ab37f292] ...
	I0108 20:12:49.260930  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7a4382188bef58b2ec6e0a3f55060523c1ed485459828a3f30aaf2ab37f292"
	I0108 20:12:49.396191  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:49.456107  639301 logs.go:123] Gathering logs for CRI-O ...
	I0108 20:12:49.456143  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 20:12:49.636477  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:49.659614  639301 logs.go:123] Gathering logs for container status ...
	I0108 20:12:49.659987  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 20:12:49.734748  639301 logs.go:123] Gathering logs for kube-apiserver [1a45f6f09297c8aaa703087c5626bbd360b1270847522ee01a5574ec98a7f2f0] ...
	I0108 20:12:49.734825  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a45f6f09297c8aaa703087c5626bbd360b1270847522ee01a5574ec98a7f2f0"
	I0108 20:12:49.840164  639301 logs.go:123] Gathering logs for etcd [189bd604b67d0f20da68ce144e77acac12921063e2ed5756fd1308822929654f] ...
	I0108 20:12:49.840240  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 189bd604b67d0f20da68ce144e77acac12921063e2ed5756fd1308822929654f"
	I0108 20:12:49.893508  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:49.900648  639301 logs.go:123] Gathering logs for coredns [589820867e149df675811d68387d2e4261a5f54531b2ff66755be48669eaebac] ...
	I0108 20:12:49.900706  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 589820867e149df675811d68387d2e4261a5f54531b2ff66755be48669eaebac"
	I0108 20:12:49.967378  639301 logs.go:123] Gathering logs for kindnet [3ebfecb7a1f058acb65fd0168734dc572f3800773595967c93228f364e060226] ...
	I0108 20:12:49.967408  639301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ebfecb7a1f058acb65fd0168734dc572f3800773595967c93228f364e060226"
	I0108 20:12:50.137852  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:50.395966  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:50.644630  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:50.894954  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:51.137995  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:51.393797  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:51.637836  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:51.896578  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:52.137194  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:52.428329  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:52.555753  639301 system_pods.go:59] 18 kube-system pods found
	I0108 20:12:52.555792  639301 system_pods.go:61] "coredns-5dd5756b68-ppg77" [15a27c27-308f-4d5d-b585-8dbb73e80a52] Running
	I0108 20:12:52.555799  639301 system_pods.go:61] "csi-hostpath-attacher-0" [1114fab6-8f4f-455a-a045-455e3c3523d0] Running
	I0108 20:12:52.555806  639301 system_pods.go:61] "csi-hostpath-resizer-0" [eaa035d3-0bed-406e-a9c9-29f6f62e7282] Running
	I0108 20:12:52.555817  639301 system_pods.go:61] "csi-hostpathplugin-ss6hm" [018d7d09-66e4-4c1a-bc70-3857ce794274] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0108 20:12:52.555825  639301 system_pods.go:61] "etcd-addons-888287" [26c8ed9c-dbfe-482b-bb2f-4bd16c75b523] Running
	I0108 20:12:52.555837  639301 system_pods.go:61] "kindnet-ql4pm" [cef68aff-17a2-46ef-a506-a17d0eb15eda] Running
	I0108 20:12:52.555845  639301 system_pods.go:61] "kube-apiserver-addons-888287" [b359d8ec-74a2-47da-9455-dc0df920926a] Running
	I0108 20:12:52.555851  639301 system_pods.go:61] "kube-controller-manager-addons-888287" [d4a5053d-0e6a-47e8-bb55-346146b1540c] Running
	I0108 20:12:52.555864  639301 system_pods.go:61] "kube-ingress-dns-minikube" [63e47b0f-dd1b-45de-84c8-41d6e00dcc8f] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0108 20:12:52.555871  639301 system_pods.go:61] "kube-proxy-rgq7f" [a0f91555-53d4-4c8e-b38c-107bb2fa523e] Running
	I0108 20:12:52.555879  639301 system_pods.go:61] "kube-scheduler-addons-888287" [8a8d9458-d062-4e89-8f0a-61d08d6f433f] Running
	I0108 20:12:52.555885  639301 system_pods.go:61] "metrics-server-7c66d45ddc-q2j5d" [341d573d-1afd-4a0f-a2e3-1e9e775a827a] Running
	I0108 20:12:52.555891  639301 system_pods.go:61] "nvidia-device-plugin-daemonset-59965" [7ab86f51-f03a-4328-b2ac-dedbb03d23dd] Running
	I0108 20:12:52.555899  639301 system_pods.go:61] "registry-nxnl7" [9bd9f646-a0f5-4c14-83ae-cab1b85ed7d3] Running
	I0108 20:12:52.555904  639301 system_pods.go:61] "registry-proxy-njg8r" [ba0eee07-5e03-407b-a6dd-59f46344bb4c] Running
	I0108 20:12:52.555909  639301 system_pods.go:61] "snapshot-controller-58dbcc7b99-5th58" [7146e505-14c5-4659-9c88-99e4281c166e] Running
	I0108 20:12:52.555914  639301 system_pods.go:61] "snapshot-controller-58dbcc7b99-jh2n8" [0f4f6b0a-d5e1-40e0-b0c0-3f0b6e3465b9] Running
	I0108 20:12:52.555919  639301 system_pods.go:61] "storage-provisioner" [ce9acc4b-e803-41e4-8820-f99365067d88] Running
	I0108 20:12:52.555930  639301 system_pods.go:74] duration metric: took 5.298862259s to wait for pod list to return data ...
	I0108 20:12:52.555938  639301 default_sa.go:34] waiting for default service account to be created ...
	I0108 20:12:52.559690  639301 default_sa.go:45] found service account: "default"
	I0108 20:12:52.559718  639301 default_sa.go:55] duration metric: took 3.770493ms for default service account to be created ...
	I0108 20:12:52.559730  639301 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 20:12:52.572206  639301 system_pods.go:86] 18 kube-system pods found
	I0108 20:12:52.572277  639301 system_pods.go:89] "coredns-5dd5756b68-ppg77" [15a27c27-308f-4d5d-b585-8dbb73e80a52] Running
	I0108 20:12:52.572299  639301 system_pods.go:89] "csi-hostpath-attacher-0" [1114fab6-8f4f-455a-a045-455e3c3523d0] Running
	I0108 20:12:52.572319  639301 system_pods.go:89] "csi-hostpath-resizer-0" [eaa035d3-0bed-406e-a9c9-29f6f62e7282] Running
	I0108 20:12:52.572358  639301 system_pods.go:89] "csi-hostpathplugin-ss6hm" [018d7d09-66e4-4c1a-bc70-3857ce794274] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0108 20:12:52.573669  639301 system_pods.go:89] "etcd-addons-888287" [26c8ed9c-dbfe-482b-bb2f-4bd16c75b523] Running
	I0108 20:12:52.573701  639301 system_pods.go:89] "kindnet-ql4pm" [cef68aff-17a2-46ef-a506-a17d0eb15eda] Running
	I0108 20:12:52.573734  639301 system_pods.go:89] "kube-apiserver-addons-888287" [b359d8ec-74a2-47da-9455-dc0df920926a] Running
	I0108 20:12:52.573762  639301 system_pods.go:89] "kube-controller-manager-addons-888287" [d4a5053d-0e6a-47e8-bb55-346146b1540c] Running
	I0108 20:12:52.573786  639301 system_pods.go:89] "kube-ingress-dns-minikube" [63e47b0f-dd1b-45de-84c8-41d6e00dcc8f] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0108 20:12:52.573808  639301 system_pods.go:89] "kube-proxy-rgq7f" [a0f91555-53d4-4c8e-b38c-107bb2fa523e] Running
	I0108 20:12:52.573841  639301 system_pods.go:89] "kube-scheduler-addons-888287" [8a8d9458-d062-4e89-8f0a-61d08d6f433f] Running
	I0108 20:12:52.573864  639301 system_pods.go:89] "metrics-server-7c66d45ddc-q2j5d" [341d573d-1afd-4a0f-a2e3-1e9e775a827a] Running
	I0108 20:12:52.573885  639301 system_pods.go:89] "nvidia-device-plugin-daemonset-59965" [7ab86f51-f03a-4328-b2ac-dedbb03d23dd] Running
	I0108 20:12:52.573904  639301 system_pods.go:89] "registry-nxnl7" [9bd9f646-a0f5-4c14-83ae-cab1b85ed7d3] Running
	I0108 20:12:52.573925  639301 system_pods.go:89] "registry-proxy-njg8r" [ba0eee07-5e03-407b-a6dd-59f46344bb4c] Running
	I0108 20:12:52.573968  639301 system_pods.go:89] "snapshot-controller-58dbcc7b99-5th58" [7146e505-14c5-4659-9c88-99e4281c166e] Running
	I0108 20:12:52.573987  639301 system_pods.go:89] "snapshot-controller-58dbcc7b99-jh2n8" [0f4f6b0a-d5e1-40e0-b0c0-3f0b6e3465b9] Running
	I0108 20:12:52.574006  639301 system_pods.go:89] "storage-provisioner" [ce9acc4b-e803-41e4-8820-f99365067d88] Running
	I0108 20:12:52.574039  639301 system_pods.go:126] duration metric: took 14.303138ms to wait for k8s-apps to be running ...
	I0108 20:12:52.574062  639301 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 20:12:52.574143  639301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:12:52.603706  639301 system_svc.go:56] duration metric: took 29.634968ms WaitForService to wait for kubelet.
	I0108 20:12:52.603728  639301 kubeadm.go:581] duration metric: took 1m24.157992246s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 20:12:52.603749  639301 node_conditions.go:102] verifying NodePressure condition ...
	I0108 20:12:52.607832  639301 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0108 20:12:52.607859  639301 node_conditions.go:123] node cpu capacity is 2
	I0108 20:12:52.607871  639301 node_conditions.go:105] duration metric: took 4.117014ms to run NodePressure ...
	I0108 20:12:52.607884  639301 start.go:228] waiting for startup goroutines ...
	I0108 20:12:52.639470  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:52.894484  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:53.138162  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:53.397119  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:53.637376  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:53.895090  639301 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:54.136619  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:54.395545  639301 kapi.go:107] duration metric: took 1m20.009201333s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0108 20:12:54.639593  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:55.148225  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:55.637932  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:56.137840  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:56.637031  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:57.137513  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:57.636526  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:58.136856  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:58.646006  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:59.137917  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:59.637661  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:00.136821  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:00.637826  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:01.138831  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:01.639094  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:02.138055  639301 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:02.636699  639301 kapi.go:107] duration metric: took 1m28.005979534s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0108 20:13:02.639469  639301 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0108 20:13:02.641938  639301 addons.go:508] enable addons completed in 1m34.751764392s: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0108 20:13:02.641989  639301 start.go:233] waiting for cluster config update ...
	I0108 20:13:02.642009  639301 start.go:242] writing updated cluster config ...
	I0108 20:13:02.642328  639301 ssh_runner.go:195] Run: rm -f paused
	I0108 20:13:03.003382  639301 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 20:13:03.006230  639301 out.go:177] * Done! kubectl is now configured to use "addons-888287" cluster and "default" namespace by default
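	
	The kapi.go:96 lines above are a poll loop: minikube repeatedly lists the pods matching each addon's label selector until they leave Pending, while api_server.go probes the apiserver's healthz endpoint directly. Roughly equivalent manual checks, assuming the same profile and the 192.168.49.2:8443 endpoint seen in this log:
	
	# Wait on the same label selector the kapi.go loop polls:
	kubectl --context addons-888287 wait pod --selector=app.kubernetes.io/name=ingress-nginx --namespace=ingress-nginx --for=condition=Ready --timeout=120s
	# Probe apiserver health the way api_server.go does; a healthy server returns "ok":
	curl -sk https://192.168.49.2:8443/healthz
	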
	
	
	==> CRI-O <==
	Jan 08 20:17:12 addons-888287 crio[891]: time="2024-01-08 20:17:12.128844342Z" level=info msg="Starting container: 2a15f6e84868d914a7c0e759dfd1a0e6744594d45fa40e7d8e5d91de9d596ef4" id=05744118-adb0-4f77-a3df-e27276f30803 name=/runtime.v1.RuntimeService/StartContainer
	Jan 08 20:17:12 addons-888287 conmon[8449]: conmon 2a15f6e84868d914a7c0 <ninfo>: container 8460 exited with status 1
	Jan 08 20:17:12 addons-888287 crio[891]: time="2024-01-08 20:17:12.142380763Z" level=info msg="Started container" PID=8460 containerID=2a15f6e84868d914a7c0e759dfd1a0e6744594d45fa40e7d8e5d91de9d596ef4 description=default/hello-world-app-5d77478584-jpcn9/hello-world-app id=05744118-adb0-4f77-a3df-e27276f30803 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ebe812b21920317706a527ebed20e7a7a74c368420edb6db5413a1727d514164
	Jan 08 20:17:12 addons-888287 crio[891]: time="2024-01-08 20:17:12.507777203Z" level=info msg="Removing container: 046214d1596318cc5eef875c81d49615c444941ebf96332c703bb2463dcd28fc" id=d1e9c363-6fb0-45dd-b95d-616cc0c27ae5 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 08 20:17:12 addons-888287 crio[891]: time="2024-01-08 20:17:12.529652985Z" level=info msg="Removed container 046214d1596318cc5eef875c81d49615c444941ebf96332c703bb2463dcd28fc: default/hello-world-app-5d77478584-jpcn9/hello-world-app" id=d1e9c363-6fb0-45dd-b95d-616cc0c27ae5 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 08 20:17:15 addons-888287 crio[891]: time="2024-01-08 20:17:15.386141150Z" level=info msg="Removing container: 917eca6745e37dfdccfb8c9fe101f92d6412ac3439b45768076e4edd949760c9" id=f5595519-8763-4bdb-a1c0-9225d08dc257 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 08 20:17:15 addons-888287 crio[891]: time="2024-01-08 20:17:15.416190302Z" level=info msg="Removed container 917eca6745e37dfdccfb8c9fe101f92d6412ac3439b45768076e4edd949760c9: ingress-nginx/ingress-nginx-admission-patch-x9d8g/patch" id=f5595519-8763-4bdb-a1c0-9225d08dc257 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 08 20:17:15 addons-888287 crio[891]: time="2024-01-08 20:17:15.419758788Z" level=info msg="Removing container: 616cd37afc9cd08602e86abb0e09072eb14fc210b41b6ee2092d65b72cfe7816" id=41bb2010-821d-4f62-98be-bc581aa0bd9a name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 08 20:17:15 addons-888287 crio[891]: time="2024-01-08 20:17:15.444643856Z" level=info msg="Removed container 616cd37afc9cd08602e86abb0e09072eb14fc210b41b6ee2092d65b72cfe7816: ingress-nginx/ingress-nginx-admission-create-d54rz/create" id=41bb2010-821d-4f62-98be-bc581aa0bd9a name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 08 20:17:15 addons-888287 crio[891]: time="2024-01-08 20:17:15.446127116Z" level=info msg="Stopping pod sandbox: 53f3c260227566324e2f910e8b2411d33e4f4b6c9cbd08e3924fe9d1ea8ae09d" id=c3b81399-190d-4924-9727-c8e365282925 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 08 20:17:15 addons-888287 crio[891]: time="2024-01-08 20:17:15.446160799Z" level=info msg="Stopped pod sandbox (already stopped): 53f3c260227566324e2f910e8b2411d33e4f4b6c9cbd08e3924fe9d1ea8ae09d" id=c3b81399-190d-4924-9727-c8e365282925 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 08 20:17:15 addons-888287 crio[891]: time="2024-01-08 20:17:15.446573503Z" level=info msg="Removing pod sandbox: 53f3c260227566324e2f910e8b2411d33e4f4b6c9cbd08e3924fe9d1ea8ae09d" id=1ab371c7-f84b-4862-b130-b4aa38378d79 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jan 08 20:17:15 addons-888287 crio[891]: time="2024-01-08 20:17:15.468341469Z" level=info msg="Removed pod sandbox: 53f3c260227566324e2f910e8b2411d33e4f4b6c9cbd08e3924fe9d1ea8ae09d" id=1ab371c7-f84b-4862-b130-b4aa38378d79 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jan 08 20:17:15 addons-888287 crio[891]: time="2024-01-08 20:17:15.468994159Z" level=info msg="Stopping pod sandbox: d4acc15a50cb9d7c1cea097606694bfa1792bf9f6c9a4615ce81b6da1734cdcf" id=3489b401-f704-40da-a43a-540d0324679d name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 08 20:17:15 addons-888287 crio[891]: time="2024-01-08 20:17:15.469126558Z" level=info msg="Stopped pod sandbox (already stopped): d4acc15a50cb9d7c1cea097606694bfa1792bf9f6c9a4615ce81b6da1734cdcf" id=3489b401-f704-40da-a43a-540d0324679d name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 08 20:17:15 addons-888287 crio[891]: time="2024-01-08 20:17:15.469511398Z" level=info msg="Removing pod sandbox: d4acc15a50cb9d7c1cea097606694bfa1792bf9f6c9a4615ce81b6da1734cdcf" id=a3ac420a-931f-4679-869f-ee47ee5f3276 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jan 08 20:17:15 addons-888287 crio[891]: time="2024-01-08 20:17:15.483027010Z" level=info msg="Removed pod sandbox: d4acc15a50cb9d7c1cea097606694bfa1792bf9f6c9a4615ce81b6da1734cdcf" id=a3ac420a-931f-4679-869f-ee47ee5f3276 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jan 08 20:17:15 addons-888287 crio[891]: time="2024-01-08 20:17:15.483667253Z" level=info msg="Stopping pod sandbox: 3853472653c0ed86f931ba7b74c94032f1cabe6b1071ac43276ea5dc73873582" id=91adac1f-9ffa-46b7-88e9-a663cd6395ed name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 08 20:17:15 addons-888287 crio[891]: time="2024-01-08 20:17:15.483780657Z" level=info msg="Stopped pod sandbox (already stopped): 3853472653c0ed86f931ba7b74c94032f1cabe6b1071ac43276ea5dc73873582" id=91adac1f-9ffa-46b7-88e9-a663cd6395ed name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 08 20:17:15 addons-888287 crio[891]: time="2024-01-08 20:17:15.484150120Z" level=info msg="Removing pod sandbox: 3853472653c0ed86f931ba7b74c94032f1cabe6b1071ac43276ea5dc73873582" id=58e0a7f5-24f7-4d66-9684-eeef707b2b9b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jan 08 20:17:15 addons-888287 crio[891]: time="2024-01-08 20:17:15.493383272Z" level=info msg="Removed pod sandbox: 3853472653c0ed86f931ba7b74c94032f1cabe6b1071ac43276ea5dc73873582" id=58e0a7f5-24f7-4d66-9684-eeef707b2b9b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jan 08 20:17:15 addons-888287 crio[891]: time="2024-01-08 20:17:15.493929270Z" level=info msg="Stopping pod sandbox: 44161a547a3d4ea1293e039354f0287f45d6399ebefc6772ebc75a4ee6f0d4a2" id=fd9f9648-3e7a-4804-9bea-1d44fd5296ec name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 08 20:17:15 addons-888287 crio[891]: time="2024-01-08 20:17:15.494122921Z" level=info msg="Stopped pod sandbox (already stopped): 44161a547a3d4ea1293e039354f0287f45d6399ebefc6772ebc75a4ee6f0d4a2" id=fd9f9648-3e7a-4804-9bea-1d44fd5296ec name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 08 20:17:15 addons-888287 crio[891]: time="2024-01-08 20:17:15.494478460Z" level=info msg="Removing pod sandbox: 44161a547a3d4ea1293e039354f0287f45d6399ebefc6772ebc75a4ee6f0d4a2" id=0c5ded02-a4c7-43bc-83af-015a2d5bedf1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jan 08 20:17:15 addons-888287 crio[891]: time="2024-01-08 20:17:15.503845053Z" level=info msg="Removed pod sandbox: 44161a547a3d4ea1293e039354f0287f45d6399ebefc6772ebc75a4ee6f0d4a2" id=0c5ded02-a4c7-43bc-83af-015a2d5bedf1 name=/runtime.v1.RuntimeService/RemovePodSandbox
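	
	The CRI-O entries above show the runtime restarting the crash-looping hello-world-app container (exit status 1) and garbage-collecting sandboxes left over from the completed ingress-nginx admission jobs. They come from the node's crio systemd unit, so the same window can be re-read with the journalctl invocation the log gatherer used earlier, assuming the profile is still up:
	
	# Re-read the CRI-O journal inside the minikube node:
	minikube -p addons-888287 ssh -- sudo journalctl -u crio -n 400
	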
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                          CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2a15f6e84868d       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                               4 seconds ago       Exited              hello-world-app           2                   ebe812b219203       hello-world-app-5d77478584-jpcn9
	85db487d30bbc       docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb                2 minutes ago       Running             nginx                     0                   f1bc0e7509a0d       nginx
	dd1878bfa723b       ghcr.io/headlamp-k8s/headlamp@sha256:0fe50c48c186b89ff3d341dba427174d8232a64c3062af5de854a3a7cb2105ce          4 minutes ago       Running             headlamp                  0                   dc5644ff7759f       headlamp-7ddfbb94ff-mdbs6
	7f78fe286f06f       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa   4 minutes ago       Running             gcp-auth                  0                   bb8472b2d1f82       gcp-auth-d4c87556c-c9kbj
	749fd8079b903       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                5 minutes ago       Running             yakd                      0                   efb2fd507dd7b       yakd-dashboard-9947fc6bf-nr6ck
	589820867e149       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                               5 minutes ago       Running             coredns                   0                   bacb021ce5c8b       coredns-5dd5756b68-ppg77
	206915f13f840       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                               5 minutes ago       Running             storage-provisioner       0                   3e12814b1d758       storage-provisioner
	f05e2a20254bf       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                               5 minutes ago       Running             kube-proxy                0                   ce206514993e5       kube-proxy-rgq7f
	3ebfecb7a1f05       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                               5 minutes ago       Running             kindnet-cni               0                   40e689b8e6b4e       kindnet-ql4pm
	04edb30ddd2e9       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                               6 minutes ago       Running             kube-scheduler            0                   a8edddb435359       kube-scheduler-addons-888287
	1a45f6f09297c       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                               6 minutes ago       Running             kube-apiserver            0                   495b5e8f7a4a2       kube-apiserver-addons-888287
	1a7a4382188be       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                               6 minutes ago       Running             kube-controller-manager   0                   2dec046be8aed       kube-controller-manager-addons-888287
	189bd604b67d0       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                               6 minutes ago       Running             etcd                      0                   94f7d213255f5       etcd-addons-888287
	
	
	==> coredns [589820867e149df675811d68387d2e4261a5f54531b2ff66755be48669eaebac] <==
	[INFO] 10.244.0.20:55523 - 29119 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000220679s
	[INFO] 10.244.0.20:33048 - 28092 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002070997s
	[INFO] 10.244.0.20:55523 - 12866 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001848055s
	[INFO] 10.244.0.20:55523 - 27579 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.005492159s
	[INFO] 10.244.0.20:33048 - 23262 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.005827013s
	[INFO] 10.244.0.20:55523 - 32525 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000105748s
	[INFO] 10.244.0.20:33048 - 2247 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000036661s
	[INFO] 10.244.0.20:32879 - 18228 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000107545s
	[INFO] 10.244.0.20:54314 - 60543 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000255641s
	[INFO] 10.244.0.20:54314 - 29215 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000068678s
	[INFO] 10.244.0.20:32879 - 40423 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00003392s
	[INFO] 10.244.0.20:32879 - 23863 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000059192s
	[INFO] 10.244.0.20:54314 - 51246 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000051577s
	[INFO] 10.244.0.20:32879 - 36005 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00006807s
	[INFO] 10.244.0.20:32879 - 44377 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061727s
	[INFO] 10.244.0.20:32879 - 3900 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000056452s
	[INFO] 10.244.0.20:54314 - 13937 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000058183s
	[INFO] 10.244.0.20:54314 - 8255 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000120863s
	[INFO] 10.244.0.20:32879 - 19087 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001203504s
	[INFO] 10.244.0.20:54314 - 3871 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000080928s
	[INFO] 10.244.0.20:32879 - 49551 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000995125s
	[INFO] 10.244.0.20:54314 - 33266 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000874492s
	[INFO] 10.244.0.20:32879 - 41727 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000056599s
	[INFO] 10.244.0.20:54314 - 48194 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000826968s
	[INFO] 10.244.0.20:54314 - 7786 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000044398s
	
	
	==> describe nodes <==
	Name:               addons-888287
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-888287
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=addons-888287
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T20_11_16_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-888287
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 20:11:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-888287
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 20:17:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 20:14:49 +0000   Mon, 08 Jan 2024 20:11:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 20:14:49 +0000   Mon, 08 Jan 2024 20:11:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 20:14:49 +0000   Mon, 08 Jan 2024 20:11:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 20:14:49 +0000   Mon, 08 Jan 2024 20:11:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-888287
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ecad1d1eb7147db9610ba6f9d1c6a4d
	  System UUID:                b356a148-9a8e-4628-b7c4-c554fed25996
	  Boot ID:                    9a753e90-64b1-452a-8e10-9b878947801f
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-jpcn9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  gcp-auth                    gcp-auth-d4c87556c-c9kbj                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  headlamp                    headlamp-7ddfbb94ff-mdbs6                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 coredns-5dd5756b68-ppg77                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m49s
	  kube-system                 etcd-addons-888287                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m2s
	  kube-system                 kindnet-ql4pm                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m49s
	  kube-system                 kube-apiserver-addons-888287             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-controller-manager-addons-888287    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-proxy-rgq7f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-scheduler-addons-888287             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-nr6ck           0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m42s                kube-proxy       
	  Normal  Starting                 6m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m9s (x8 over 6m9s)  kubelet          Node addons-888287 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m9s (x8 over 6m9s)  kubelet          Node addons-888287 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m9s (x8 over 6m9s)  kubelet          Node addons-888287 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m1s                 kubelet          Node addons-888287 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m1s                 kubelet          Node addons-888287 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m1s                 kubelet          Node addons-888287 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m50s                node-controller  Node addons-888287 event: Registered Node addons-888287 in Controller
	  Normal  NodeReady                5m18s                kubelet          Node addons-888287 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001276] FS-Cache: O-key=[8] '976eed0000000000'
	[  +0.000761] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.001064] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=00000000fa08827a
	[  +0.001201] FS-Cache: N-key=[8] '976eed0000000000'
	[  +0.006041] FS-Cache: Duplicate cookie detected
	[  +0.000712] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001100] FS-Cache: O-cookie d=000000001df03bef{9p.inode} n=000000006f7ae58b
	[  +0.001081] FS-Cache: O-key=[8] '976eed0000000000'
	[  +0.000745] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.000965] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=00000000d63010af
	[  +0.001227] FS-Cache: N-key=[8] '976eed0000000000'
	[  +2.377938] FS-Cache: Duplicate cookie detected
	[  +0.000792] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001112] FS-Cache: O-cookie d=000000001df03bef{9p.inode} n=0000000089d8152c
	[  +0.001157] FS-Cache: O-key=[8] '966eed0000000000'
	[  +0.000775] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001021] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=00000000fa08827a
	[  +0.001152] FS-Cache: N-key=[8] '966eed0000000000'
	[  +0.402011] FS-Cache: Duplicate cookie detected
	[  +0.000847] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001178] FS-Cache: O-cookie d=000000001df03bef{9p.inode} n=0000000045168e1b
	[  +0.001226] FS-Cache: O-key=[8] '9c6eed0000000000'
	[  +0.000809] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001087] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=00000000b4acb842
	[  +0.001272] FS-Cache: N-key=[8] '9c6eed0000000000'
	
	
	==> etcd [189bd604b67d0f20da68ce144e77acac12921063e2ed5756fd1308822929654f] <==
	{"level":"info","ts":"2024-01-08T20:11:08.668238Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-08T20:11:08.668337Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-01-08T20:11:08.668352Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-01-08T20:11:08.678727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-01-08T20:11:08.678851Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-01-08T20:11:08.934548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-08T20:11:08.934598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-08T20:11:08.934613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-01-08T20:11:08.934637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-01-08T20:11:08.934643Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-08T20:11:08.934653Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-01-08T20:11:08.934661Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-08T20:11:08.938597Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-888287 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T20:11:08.93872Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T20:11:08.938883Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T20:11:08.940906Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-01-08T20:11:08.94098Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T20:11:08.940993Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T20:11:08.950524Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:11:08.994013Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:11:09.004891Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T20:11:09.018571Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:11:09.018644Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:11:28.997432Z","caller":"traceutil/trace.go:171","msg":"trace[1842705800] transaction","detail":"{read_only:false; response_revision:410; number_of_response:1; }","duration":"126.11663ms","start":"2024-01-08T20:11:28.871301Z","end":"2024-01-08T20:11:28.997418Z","steps":["trace[1842705800] 'process raft request'  (duration: 99.49594ms)","trace[1842705800] 'compare'  (duration: 26.257209ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T20:11:31.674824Z","caller":"traceutil/trace.go:171","msg":"trace[1762001204] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"122.155354ms","start":"2024-01-08T20:11:31.552652Z","end":"2024-01-08T20:11:31.674808Z","steps":["trace[1762001204] 'process raft request'  (duration: 95.035575ms)","trace[1762001204] 'compare'  (duration: 26.851691ms)"],"step_count":2}
	
	
	==> gcp-auth [7f78fe286f06f6e8d435d64fe4f8cbdb5473c50a5731fea48fb753ae045ba94a] <==
	2024/01/08 20:12:46 GCP Auth Webhook started!
	2024/01/08 20:13:04 Ready to marshal response ...
	2024/01/08 20:13:04 Ready to write response ...
	2024/01/08 20:13:04 Ready to marshal response ...
	2024/01/08 20:13:04 Ready to write response ...
	2024/01/08 20:13:04 Ready to marshal response ...
	2024/01/08 20:13:04 Ready to write response ...
	2024/01/08 20:13:14 Ready to marshal response ...
	2024/01/08 20:13:14 Ready to write response ...
	2024/01/08 20:13:23 Ready to marshal response ...
	2024/01/08 20:13:23 Ready to write response ...
	2024/01/08 20:13:23 Ready to marshal response ...
	2024/01/08 20:13:23 Ready to write response ...
	2024/01/08 20:13:32 Ready to marshal response ...
	2024/01/08 20:13:32 Ready to write response ...
	2024/01/08 20:13:41 Ready to marshal response ...
	2024/01/08 20:13:41 Ready to write response ...
	2024/01/08 20:14:14 Ready to marshal response ...
	2024/01/08 20:14:14 Ready to write response ...
	2024/01/08 20:14:30 Ready to marshal response ...
	2024/01/08 20:14:30 Ready to write response ...
	2024/01/08 20:16:51 Ready to marshal response ...
	2024/01/08 20:16:51 Ready to write response ...
	
	
	==> kernel <==
	 20:17:17 up  2:59,  0 users,  load average: 0.88, 0.97, 1.28
	Linux addons-888287 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [3ebfecb7a1f058acb65fd0168734dc572f3800773595967c93228f364e060226] <==
	I0108 20:15:08.530274       1 main.go:227] handling current node
	I0108 20:15:18.534763       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:15:18.534790       1 main.go:227] handling current node
	I0108 20:15:28.538355       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:15:28.538386       1 main.go:227] handling current node
	I0108 20:15:38.543113       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:15:38.543143       1 main.go:227] handling current node
	I0108 20:15:48.555865       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:15:48.555891       1 main.go:227] handling current node
	I0108 20:15:58.562135       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:15:58.562164       1 main.go:227] handling current node
	I0108 20:16:08.573180       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:16:08.573211       1 main.go:227] handling current node
	I0108 20:16:18.577181       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:16:18.577210       1 main.go:227] handling current node
	I0108 20:16:28.582502       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:16:28.582633       1 main.go:227] handling current node
	I0108 20:16:38.595424       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:16:38.595451       1 main.go:227] handling current node
	I0108 20:16:48.607599       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:16:48.607627       1 main.go:227] handling current node
	I0108 20:16:58.613005       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:16:58.613031       1 main.go:227] handling current node
	I0108 20:17:08.623420       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:17:08.623447       1 main.go:227] handling current node
	
	
	==> kube-apiserver [1a45f6f09297c8aaa703087c5626bbd360b1270847522ee01a5574ec98a7f2f0] <==
	I0108 20:14:22.370143       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0108 20:14:23.396253       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0108 20:14:29.975646       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:14:29.975697       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:14:29.994347       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:14:29.994512       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:14:30.010275       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:14:30.010330       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:14:30.032196       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:14:30.032255       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:14:30.040036       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:14:30.040192       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:14:30.058156       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:14:30.058296       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:14:30.085464       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:14:30.085511       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:14:30.090996       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:14:30.091047       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:14:30.671348       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0108 20:14:31.024648       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.34.124"}
	W0108 20:14:31.033012       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0108 20:14:31.091142       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0108 20:14:31.108211       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0108 20:14:39.209223       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0108 20:16:51.567997       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.89.130"}
	
	
	==> kube-controller-manager [1a7a4382188bef58b2ec6e0a3f55060523c1ed485459828a3f30aaf2ab37f292] <==
	W0108 20:16:11.970202       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:16:11.970236       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 20:16:31.393416       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:16:31.393450       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 20:16:37.950515       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:16:37.950551       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 20:16:42.346589       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:16:42.346627       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0108 20:16:51.285111       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0108 20:16:51.305794       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-jpcn9"
	I0108 20:16:51.319687       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="35.296203ms"
	I0108 20:16:51.335192       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="14.924533ms"
	I0108 20:16:51.335357       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="44.463µs"
	I0108 20:16:51.345514       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="71.459µs"
	I0108 20:16:54.476413       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="42.232µs"
	I0108 20:16:55.475917       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="71.213µs"
	I0108 20:16:56.476840       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="72.247µs"
	W0108 20:17:06.444430       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:17:06.444460       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0108 20:17:08.252367       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0108 20:17:08.259131       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="5.104µs"
	I0108 20:17:08.264828       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W0108 20:17:09.373475       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:17:09.373507       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0108 20:17:12.519750       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="76.776µs"
	
	
	==> kube-proxy [f05e2a20254bf733af8249592a49862fcc308c6d35b850bf990340aa1ad173b3] <==
	I0108 20:11:30.119201       1 server_others.go:69] "Using iptables proxy"
	I0108 20:11:31.226110       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0108 20:11:33.883139       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0108 20:11:33.925828       1 server_others.go:152] "Using iptables Proxier"
	I0108 20:11:33.925994       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0108 20:11:33.926095       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0108 20:11:33.926229       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 20:11:33.927063       1 server.go:846] "Version info" version="v1.28.4"
	I0108 20:11:33.927570       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 20:11:33.933075       1 config.go:188] "Starting service config controller"
	I0108 20:11:33.933173       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 20:11:33.933984       1 config.go:97] "Starting endpoint slice config controller"
	I0108 20:11:33.934033       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 20:11:33.934403       1 config.go:315] "Starting node config controller"
	I0108 20:11:33.934410       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 20:11:34.037052       1 shared_informer.go:318] Caches are synced for node config
	I0108 20:11:34.037295       1 shared_informer.go:318] Caches are synced for service config
	I0108 20:11:34.037306       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [04edb30ddd2e9ba9eff0c8a43dd874a71840d657bd07b1d549b1fd37d4330d1d] <==
	W0108 20:11:11.997805       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 20:11:11.997819       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 20:11:11.998692       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 20:11:11.998834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 20:11:11.999013       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 20:11:11.999251       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 20:11:11.999068       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 20:11:11.999337       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 20:11:11.999115       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 20:11:11.999421       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 20:11:11.999164       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 20:11:11.999506       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 20:11:11.998801       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 20:11:11.999582       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 20:11:12.810952       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 20:11:12.810989       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 20:11:12.907240       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 20:11:12.907284       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 20:11:12.947441       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 20:11:12.947565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 20:11:12.952845       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 20:11:12.952942       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 20:11:13.049076       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 20:11:13.049187       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0108 20:11:13.383667       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 08 20:17:11 addons-888287 kubelet[1352]: I0108 20:17:11.496454    1352 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk792\" (UniqueName: \"kubernetes.io/projected/69bb57f8-599c-4db9-a135-cac42e564aac-kube-api-access-dk792\") pod \"69bb57f8-599c-4db9-a135-cac42e564aac\" (UID: \"69bb57f8-599c-4db9-a135-cac42e564aac\") "
	Jan 08 20:17:11 addons-888287 kubelet[1352]: I0108 20:17:11.497974    1352 scope.go:117] "RemoveContainer" containerID="f2fdb92c8219d7e26f925f4d6e4128d25f325d0a345c5d2810049eef8a62ea35"
	Jan 08 20:17:11 addons-888287 kubelet[1352]: I0108 20:17:11.502596    1352 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69bb57f8-599c-4db9-a135-cac42e564aac-kube-api-access-dk792" (OuterVolumeSpecName: "kube-api-access-dk792") pod "69bb57f8-599c-4db9-a135-cac42e564aac" (UID: "69bb57f8-599c-4db9-a135-cac42e564aac"). InnerVolumeSpecName "kube-api-access-dk792". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 08 20:17:11 addons-888287 kubelet[1352]: I0108 20:17:11.503218    1352 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69bb57f8-599c-4db9-a135-cac42e564aac-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "69bb57f8-599c-4db9-a135-cac42e564aac" (UID: "69bb57f8-599c-4db9-a135-cac42e564aac"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 20:17:11 addons-888287 kubelet[1352]: I0108 20:17:11.521939    1352 scope.go:117] "RemoveContainer" containerID="f2fdb92c8219d7e26f925f4d6e4128d25f325d0a345c5d2810049eef8a62ea35"
	Jan 08 20:17:11 addons-888287 kubelet[1352]: E0108 20:17:11.523262    1352 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2fdb92c8219d7e26f925f4d6e4128d25f325d0a345c5d2810049eef8a62ea35\": container with ID starting with f2fdb92c8219d7e26f925f4d6e4128d25f325d0a345c5d2810049eef8a62ea35 not found: ID does not exist" containerID="f2fdb92c8219d7e26f925f4d6e4128d25f325d0a345c5d2810049eef8a62ea35"
	Jan 08 20:17:11 addons-888287 kubelet[1352]: I0108 20:17:11.523314    1352 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2fdb92c8219d7e26f925f4d6e4128d25f325d0a345c5d2810049eef8a62ea35"} err="failed to get container status \"f2fdb92c8219d7e26f925f4d6e4128d25f325d0a345c5d2810049eef8a62ea35\": rpc error: code = NotFound desc = could not find container \"f2fdb92c8219d7e26f925f4d6e4128d25f325d0a345c5d2810049eef8a62ea35\": container with ID starting with f2fdb92c8219d7e26f925f4d6e4128d25f325d0a345c5d2810049eef8a62ea35 not found: ID does not exist"
	Jan 08 20:17:11 addons-888287 kubelet[1352]: I0108 20:17:11.596856    1352 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-dk792\" (UniqueName: \"kubernetes.io/projected/69bb57f8-599c-4db9-a135-cac42e564aac-kube-api-access-dk792\") on node \"addons-888287\" DevicePath \"\""
	Jan 08 20:17:11 addons-888287 kubelet[1352]: I0108 20:17:11.596895    1352 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/69bb57f8-599c-4db9-a135-cac42e564aac-webhook-cert\") on node \"addons-888287\" DevicePath \"\""
	Jan 08 20:17:12 addons-888287 kubelet[1352]: I0108 20:17:12.058282    1352 scope.go:117] "RemoveContainer" containerID="046214d1596318cc5eef875c81d49615c444941ebf96332c703bb2463dcd28fc"
	Jan 08 20:17:12 addons-888287 kubelet[1352]: I0108 20:17:12.502575    1352 scope.go:117] "RemoveContainer" containerID="046214d1596318cc5eef875c81d49615c444941ebf96332c703bb2463dcd28fc"
	Jan 08 20:17:12 addons-888287 kubelet[1352]: I0108 20:17:12.503138    1352 scope.go:117] "RemoveContainer" containerID="2a15f6e84868d914a7c0e759dfd1a0e6744594d45fa40e7d8e5d91de9d596ef4"
	Jan 08 20:17:12 addons-888287 kubelet[1352]: E0108 20:17:12.503418    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-jpcn9_default(ed0d8fba-b1c6-4cc5-9b6c-109bd0584a21)\"" pod="default/hello-world-app-5d77478584-jpcn9" podUID="ed0d8fba-b1c6-4cc5-9b6c-109bd0584a21"
	Jan 08 20:17:13 addons-888287 kubelet[1352]: I0108 20:17:13.059357    1352 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="69bb57f8-599c-4db9-a135-cac42e564aac" path="/var/lib/kubelet/pods/69bb57f8-599c-4db9-a135-cac42e564aac/volumes"
	Jan 08 20:17:15 addons-888287 kubelet[1352]: E0108 20:17:15.230479    1352 manager.go:1106] Failed to create existing container: /crio-c0644890e95aa9e889921af506ae9e50ce094bebe4445b361c2959d64b60b3cd: Error finding container c0644890e95aa9e889921af506ae9e50ce094bebe4445b361c2959d64b60b3cd: Status 404 returned error can't find the container with id c0644890e95aa9e889921af506ae9e50ce094bebe4445b361c2959d64b60b3cd
	Jan 08 20:17:15 addons-888287 kubelet[1352]: E0108 20:17:15.231124    1352 manager.go:1106] Failed to create existing container: /docker/6f990fac2af108669a80d387eb72bca995de3fbc7e9cf5f792812e13d4a5be67/crio-2cb4bf376a5549ec0e13af32b1b04ab20664f539cd84f11a116d64f112ee9b75: Error finding container 2cb4bf376a5549ec0e13af32b1b04ab20664f539cd84f11a116d64f112ee9b75: Status 404 returned error can't find the container with id 2cb4bf376a5549ec0e13af32b1b04ab20664f539cd84f11a116d64f112ee9b75
	Jan 08 20:17:15 addons-888287 kubelet[1352]: E0108 20:17:15.231374    1352 manager.go:1106] Failed to create existing container: /crio-2cb4bf376a5549ec0e13af32b1b04ab20664f539cd84f11a116d64f112ee9b75: Error finding container 2cb4bf376a5549ec0e13af32b1b04ab20664f539cd84f11a116d64f112ee9b75: Status 404 returned error can't find the container with id 2cb4bf376a5549ec0e13af32b1b04ab20664f539cd84f11a116d64f112ee9b75
	Jan 08 20:17:15 addons-888287 kubelet[1352]: E0108 20:17:15.231590    1352 manager.go:1106] Failed to create existing container: /docker/6f990fac2af108669a80d387eb72bca995de3fbc7e9cf5f792812e13d4a5be67/crio-c0644890e95aa9e889921af506ae9e50ce094bebe4445b361c2959d64b60b3cd: Error finding container c0644890e95aa9e889921af506ae9e50ce094bebe4445b361c2959d64b60b3cd: Status 404 returned error can't find the container with id c0644890e95aa9e889921af506ae9e50ce094bebe4445b361c2959d64b60b3cd
	Jan 08 20:17:15 addons-888287 kubelet[1352]: E0108 20:17:15.231840    1352 manager.go:1106] Failed to create existing container: /crio-9981417bce88f6640e91a807843800aab4425d5039fb7754efe5a2655d89e018: Error finding container 9981417bce88f6640e91a807843800aab4425d5039fb7754efe5a2655d89e018: Status 404 returned error can't find the container with id 9981417bce88f6640e91a807843800aab4425d5039fb7754efe5a2655d89e018
	Jan 08 20:17:15 addons-888287 kubelet[1352]: E0108 20:17:15.233066    1352 manager.go:1106] Failed to create existing container: /docker/6f990fac2af108669a80d387eb72bca995de3fbc7e9cf5f792812e13d4a5be67/crio-1ad3a7b52841848bcb5abc48915fd66a706238cc430a0bee422cc802f7c40a77: Error finding container 1ad3a7b52841848bcb5abc48915fd66a706238cc430a0bee422cc802f7c40a77: Status 404 returned error can't find the container with id 1ad3a7b52841848bcb5abc48915fd66a706238cc430a0bee422cc802f7c40a77
	Jan 08 20:17:15 addons-888287 kubelet[1352]: E0108 20:17:15.233227    1352 manager.go:1106] Failed to create existing container: /crio-1ad3a7b52841848bcb5abc48915fd66a706238cc430a0bee422cc802f7c40a77: Error finding container 1ad3a7b52841848bcb5abc48915fd66a706238cc430a0bee422cc802f7c40a77: Status 404 returned error can't find the container with id 1ad3a7b52841848bcb5abc48915fd66a706238cc430a0bee422cc802f7c40a77
	Jan 08 20:17:15 addons-888287 kubelet[1352]: E0108 20:17:15.296701    1352 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/6e0fca294d3fd7f6a7ff6f52b77d281da25b4a960e342ca2abe3b2b54ad9f021/diff" to get inode usage: stat /var/lib/containers/storage/overlay/6e0fca294d3fd7f6a7ff6f52b77d281da25b4a960e342ca2abe3b2b54ad9f021/diff: no such file or directory, extraDiskErr: <nil>
	Jan 08 20:17:15 addons-888287 kubelet[1352]: E0108 20:17:15.306089    1352 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/6e0fca294d3fd7f6a7ff6f52b77d281da25b4a960e342ca2abe3b2b54ad9f021/diff" to get inode usage: stat /var/lib/containers/storage/overlay/6e0fca294d3fd7f6a7ff6f52b77d281da25b4a960e342ca2abe3b2b54ad9f021/diff: no such file or directory, extraDiskErr: <nil>
	Jan 08 20:17:15 addons-888287 kubelet[1352]: I0108 20:17:15.384774    1352 scope.go:117] "RemoveContainer" containerID="917eca6745e37dfdccfb8c9fe101f92d6412ac3439b45768076e4edd949760c9"
	Jan 08 20:17:15 addons-888287 kubelet[1352]: I0108 20:17:15.416598    1352 scope.go:117] "RemoveContainer" containerID="616cd37afc9cd08602e86abb0e09072eb14fc210b41b6ee2092d65b72cfe7816"
	
	
	==> storage-provisioner [206915f13f84052dac6332d81074703deafa2d7043fdf3c17e62b4cac2f587b4] <==
	I0108 20:11:59.403763       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 20:11:59.473988       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 20:11:59.474178       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 20:11:59.519840       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 20:11:59.520022       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-888287_d4f6c69b-3a21-4964-9422-a2ad601d5af2!
	I0108 20:11:59.528957       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"44f9dd10-fe0a-47be-8995-a53359cc0df3", APIVersion:"v1", ResourceVersion:"933", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-888287_d4f6c69b-3a21-4964-9422-a2ad601d5af2 became leader
	I0108 20:11:59.620342       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-888287_d4f6c69b-3a21-4964-9422-a2ad601d5af2!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-888287 -n addons-888287
helpers_test.go:261: (dbg) Run:  kubectl --context addons-888287 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (167.77s)
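The two failing checks above can be replayed by hand against the same profile; the sketch below uses the exact commands the harness ran (profile name and node IP as captured in this run). The "-m 30" flag is an addition here to bound the wait; exit code 28 from curl is its operation-timed-out status, matching the "ssh: Process exited with status 28" in the stderr capture.

	# Ingress data path: curl the node's port 80 with the test Host header.
	out/minikube-linux-arm64 -p addons-888287 ssh "curl -s -m 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# Ingress DNS path: resolve the test record directly against the node IP.
	nslookup hello-john.test 192.168.49.2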

TestIngressAddonLegacy/serial/ValidateIngressAddons (212.31s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-105176 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-105176 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.285364557s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-105176 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-105176 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [fc42b27d-a783-4fc7-9b0d-ee989b3546ac] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [fc42b27d-a783-4fc7-9b0d-ee989b3546ac] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 39.003272361s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-105176 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0108 20:26:26.377683  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
E0108 20:26:26.382933  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
E0108 20:26:26.393173  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
E0108 20:26:26.413466  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
E0108 20:26:26.453705  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
E0108 20:26:26.534035  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
E0108 20:26:26.694378  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
E0108 20:26:27.014939  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
E0108 20:26:27.655819  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
E0108 20:26:28.936640  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
E0108 20:26:31.497620  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
E0108 20:26:36.618754  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
E0108 20:26:46.858978  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
E0108 20:27:07.339192  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-105176 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.824522711s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
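
Note: exit status 28 is curl's operation-timed-out code, so the SSH session ran but the ingress controller never answered within curl's window. A minimal Go sketch of the equivalent check, run against the node IP from the host rather than over SSH (the address 192.168.49.2 comes from the docker inspect output below; the 10s timeout is an illustrative choice, not a value from the test):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Node IP taken from the inspect output below; reaching the ingress
		// from the host network, instead of over SSH, is our assumption.
		req, err := http.NewRequest(http.MethodGet, "http://192.168.49.2/", nil)
		if err != nil {
			panic(err)
		}
		// The ingress rule routes on the Host header, exactly what the
		// test's curl -H 'Host: nginx.example.com' sets.
		req.Host = "nginx.example.com"

		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Do(req)
		if err != nil {
			fmt.Println("request failed:", err) // analogous to curl exiting 28 on timeout
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status, len(body), "bytes")
	}
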
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-105176 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-105176 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.010782361s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr:
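
Note: the timeout means nothing answered DNS queries at 192.168.49.2, where the ingress-dns addon should be listening. A minimal Go sketch of the same query with the resolver pinned to that address (port 53/udp and both timeouts are our assumptions, not values from the test):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Pin the resolver to the node IP, mirroring
		// "nslookup hello-john.test 192.168.49.2"; 53/udp is assumed.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, "udp", "192.168.49.2:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "hello-john.test")
		if err != nil {
			// A timeout here corresponds to ";; connection timed out" above.
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println(addrs)
	}
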
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-105176 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-105176 addons disable ingress-dns --alsologtostderr -v=1: (2.148724567s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-105176 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-105176 addons disable ingress --alsologtostderr -v=1: (7.547873888s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-105176
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-105176:

-- stdout --
	[
	    {
	        "Id": "74237e1c64e65329f42f2bea4f9417c81f10edabd08bf3d38bcbfda8543bc42c",
	        "Created": "2024-01-08T20:22:47.029665293Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 666720,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T20:22:47.346024946Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3bfff26a1ae256fcdf8f10a333efdefbe26edc5c1669e1cc5c973c016e44d3c4",
	        "ResolvConfPath": "/var/lib/docker/containers/74237e1c64e65329f42f2bea4f9417c81f10edabd08bf3d38bcbfda8543bc42c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74237e1c64e65329f42f2bea4f9417c81f10edabd08bf3d38bcbfda8543bc42c/hostname",
	        "HostsPath": "/var/lib/docker/containers/74237e1c64e65329f42f2bea4f9417c81f10edabd08bf3d38bcbfda8543bc42c/hosts",
	        "LogPath": "/var/lib/docker/containers/74237e1c64e65329f42f2bea4f9417c81f10edabd08bf3d38bcbfda8543bc42c/74237e1c64e65329f42f2bea4f9417c81f10edabd08bf3d38bcbfda8543bc42c-json.log",
	        "Name": "/ingress-addon-legacy-105176",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-105176:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-105176",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c30528ea7ab49c0ee7a4608bc4f2bf2cb96428de5c14fbe106a9dc66093a0714-init/diff:/var/lib/docker/overlay2/6dc70d5fd4ec367ecfc7dc99fc7bcaf35d9752c3024a41d78b490451f211e3b4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c30528ea7ab49c0ee7a4608bc4f2bf2cb96428de5c14fbe106a9dc66093a0714/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c30528ea7ab49c0ee7a4608bc4f2bf2cb96428de5c14fbe106a9dc66093a0714/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c30528ea7ab49c0ee7a4608bc4f2bf2cb96428de5c14fbe106a9dc66093a0714/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-105176",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-105176/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-105176",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-105176",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-105176",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d2cf9ff8bda41cc98600d303f72775059c0b7ac2e1296fad904f39109d680ecb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33415"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33417"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33416"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d2cf9ff8bda4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-105176": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "74237e1c64e6",
	                        "ingress-addon-legacy-105176"
	                    ],
	                    "NetworkID": "e439d5193b970ed2cf2c0d63d1006a5ecf4cdaa20f6c07623424ad529ff17e74",
	                    "EndpointID": "a0d91c89bb1dee7e2da556f3fc5ac1a42b707242c1f684ac84fcf396302c4881",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
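
Note: this inspect output is also where the harness derives its SSH endpoint: 22/tcp is published on 127.0.0.1:33419, and the docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" template in the logs below reads that same field. As a sketch, the mapping can be recovered from the raw JSON piped in from docker inspect; the pared-down struct here is our own stand-in, not the harness's type:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// Only the slice of the inspect document we need: the published-port map.
	type container struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		// docker inspect emits a JSON array; usage:
		//   docker inspect ingress-addon-legacy-105176 | go run main.go
		var cs []container
		if err := json.NewDecoder(os.Stdin).Decode(&cs); err != nil || len(cs) == 0 {
			fmt.Fprintln(os.Stderr, "decode failed:", err)
			os.Exit(1)
		}
		for _, b := range cs[0].NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("ssh endpoint: %s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:33419 above
		}
	}
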
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-105176 -n ingress-addon-legacy-105176
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-105176 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-105176 logs -n 25: (1.375694337s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-735851 ssh findmnt                                          | functional-735851           | jenkins | v1.32.0 | 08 Jan 24 20:22 UTC |                     |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-735851                                                   | functional-735851           | jenkins | v1.32.0 | 08 Jan 24 20:22 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1713408707/001:/mount1 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| mount          | -p functional-735851                                                   | functional-735851           | jenkins | v1.32.0 | 08 Jan 24 20:22 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1713408707/001:/mount2 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-735851 ssh findmnt                                          | functional-735851           | jenkins | v1.32.0 | 08 Jan 24 20:22 UTC | 08 Jan 24 20:22 UTC |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-735851 ssh findmnt                                          | functional-735851           | jenkins | v1.32.0 | 08 Jan 24 20:22 UTC | 08 Jan 24 20:22 UTC |
	|                | -T /mount2                                                             |                             |         |         |                     |                     |
	| ssh            | functional-735851 ssh findmnt                                          | functional-735851           | jenkins | v1.32.0 | 08 Jan 24 20:22 UTC | 08 Jan 24 20:22 UTC |
	|                | -T /mount3                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-735851                                                   | functional-735851           | jenkins | v1.32.0 | 08 Jan 24 20:22 UTC |                     |
	|                | --kill=true                                                            |                             |         |         |                     |                     |
	| update-context | functional-735851                                                      | functional-735851           | jenkins | v1.32.0 | 08 Jan 24 20:22 UTC | 08 Jan 24 20:22 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-735851                                                      | functional-735851           | jenkins | v1.32.0 | 08 Jan 24 20:22 UTC | 08 Jan 24 20:22 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-735851                                                      | functional-735851           | jenkins | v1.32.0 | 08 Jan 24 20:22 UTC | 08 Jan 24 20:22 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-735851                                                      | functional-735851           | jenkins | v1.32.0 | 08 Jan 24 20:22 UTC | 08 Jan 24 20:22 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-735851                                                      | functional-735851           | jenkins | v1.32.0 | 08 Jan 24 20:22 UTC | 08 Jan 24 20:22 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-735851 ssh pgrep                                            | functional-735851           | jenkins | v1.32.0 | 08 Jan 24 20:22 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-735851                                                      | functional-735851           | jenkins | v1.32.0 | 08 Jan 24 20:22 UTC | 08 Jan 24 20:22 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-735851 image build -t                                       | functional-735851           | jenkins | v1.32.0 | 08 Jan 24 20:22 UTC | 08 Jan 24 20:22 UTC |
	|                | localhost/my-image:functional-735851                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-735851                                                      | functional-735851           | jenkins | v1.32.0 | 08 Jan 24 20:22 UTC | 08 Jan 24 20:22 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-735851 image ls                                             | functional-735851           | jenkins | v1.32.0 | 08 Jan 24 20:22 UTC | 08 Jan 24 20:22 UTC |
	| delete         | -p functional-735851                                                   | functional-735851           | jenkins | v1.32.0 | 08 Jan 24 20:22 UTC | 08 Jan 24 20:22 UTC |
	| start          | -p ingress-addon-legacy-105176                                         | ingress-addon-legacy-105176 | jenkins | v1.32.0 | 08 Jan 24 20:22 UTC | 08 Jan 24 20:23 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-105176                                            | ingress-addon-legacy-105176 | jenkins | v1.32.0 | 08 Jan 24 20:23 UTC | 08 Jan 24 20:24 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-105176                                            | ingress-addon-legacy-105176 | jenkins | v1.32.0 | 08 Jan 24 20:24 UTC | 08 Jan 24 20:24 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-105176                                            | ingress-addon-legacy-105176 | jenkins | v1.32.0 | 08 Jan 24 20:25 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-105176 ip                                         | ingress-addon-legacy-105176 | jenkins | v1.32.0 | 08 Jan 24 20:27 UTC | 08 Jan 24 20:27 UTC |
	| addons         | ingress-addon-legacy-105176                                            | ingress-addon-legacy-105176 | jenkins | v1.32.0 | 08 Jan 24 20:27 UTC | 08 Jan 24 20:27 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-105176                                            | ingress-addon-legacy-105176 | jenkins | v1.32.0 | 08 Jan 24 20:27 UTC | 08 Jan 24 20:27 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:22:22
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:22:22.677966  666261 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:22:22.678197  666261 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:22:22.678224  666261 out.go:309] Setting ErrFile to fd 2...
	I0108 20:22:22.678243  666261 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:22:22.678572  666261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-633350/.minikube/bin
	I0108 20:22:22.679082  666261 out.go:303] Setting JSON to false
	I0108 20:22:22.680028  666261 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11085,"bootTime":1704734258,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0108 20:22:22.680137  666261 start.go:138] virtualization:  
	I0108 20:22:22.683757  666261 out.go:177] * [ingress-addon-legacy-105176] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 20:22:22.688108  666261 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:22:22.688233  666261 notify.go:220] Checking for updates...
	I0108 20:22:22.693978  666261 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:22:22.696740  666261 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-633350/kubeconfig
	I0108 20:22:22.699430  666261 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-633350/.minikube
	I0108 20:22:22.701859  666261 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0108 20:22:22.703814  666261 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:22:22.706580  666261 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:22:22.735172  666261 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:22:22.735300  666261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:22:22.815076  666261 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-08 20:22:22.8053365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:22:22.815185  666261 docker.go:295] overlay module found
	I0108 20:22:22.818024  666261 out.go:177] * Using the docker driver based on user configuration
	I0108 20:22:22.820787  666261 start.go:298] selected driver: docker
	I0108 20:22:22.820807  666261 start.go:902] validating driver "docker" against <nil>
	I0108 20:22:22.820821  666261 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:22:22.821442  666261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:22:22.891595  666261 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-08 20:22:22.882521295 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:22:22.891767  666261 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 20:22:22.892009  666261 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 20:22:22.894405  666261 out.go:177] * Using Docker driver with root privileges
	I0108 20:22:22.896968  666261 cni.go:84] Creating CNI manager for ""
	I0108 20:22:22.896992  666261 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 20:22:22.897011  666261 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 20:22:22.897028  666261 start_flags.go:323] config:
	{Name:ingress-addon-legacy-105176 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-105176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:22:22.899771  666261 out.go:177] * Starting control plane node ingress-addon-legacy-105176 in cluster ingress-addon-legacy-105176
	I0108 20:22:22.901949  666261 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 20:22:22.904127  666261 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:22:22.906201  666261 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 20:22:22.906241  666261 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0108 20:22:22.923253  666261 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0108 20:22:22.923281  666261 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0108 20:22:22.961303  666261 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0108 20:22:22.961323  666261 cache.go:56] Caching tarball of preloaded images
	I0108 20:22:22.961488  666261 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 20:22:22.964280  666261 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0108 20:22:22.966483  666261 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0108 20:22:23.080148  666261 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0108 20:22:39.051303  666261 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0108 20:22:39.051416  666261 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0108 20:22:40.387249  666261 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0108 20:22:40.387660  666261 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/config.json ...
	I0108 20:22:40.387706  666261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/config.json: {Name:mk6138c6f0240936a08fba0997ca831df8daf17d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:22:40.387916  666261 cache.go:194] Successfully downloaded all kic artifacts
	I0108 20:22:40.387977  666261 start.go:365] acquiring machines lock for ingress-addon-legacy-105176: {Name:mk2877702a5593ae9e1ac085f9df2360a27eda48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:22:40.388053  666261 start.go:369] acquired machines lock for "ingress-addon-legacy-105176" in 52.792µs
	I0108 20:22:40.388077  666261 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-105176 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-105176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 20:22:40.388149  666261 start.go:125] createHost starting for "" (driver="docker")
	I0108 20:22:40.391027  666261 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0108 20:22:40.391277  666261 start.go:159] libmachine.API.Create for "ingress-addon-legacy-105176" (driver="docker")
	I0108 20:22:40.391303  666261 client.go:168] LocalClient.Create starting
	I0108 20:22:40.391397  666261 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem
	I0108 20:22:40.391433  666261 main.go:141] libmachine: Decoding PEM data...
	I0108 20:22:40.391452  666261 main.go:141] libmachine: Parsing certificate...
	I0108 20:22:40.391521  666261 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem
	I0108 20:22:40.391545  666261 main.go:141] libmachine: Decoding PEM data...
	I0108 20:22:40.391561  666261 main.go:141] libmachine: Parsing certificate...
	I0108 20:22:40.391961  666261 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-105176 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 20:22:40.410079  666261 cli_runner.go:211] docker network inspect ingress-addon-legacy-105176 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 20:22:40.410171  666261 network_create.go:281] running [docker network inspect ingress-addon-legacy-105176] to gather additional debugging logs...
	I0108 20:22:40.410197  666261 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-105176
	W0108 20:22:40.427475  666261 cli_runner.go:211] docker network inspect ingress-addon-legacy-105176 returned with exit code 1
	I0108 20:22:40.427512  666261 network_create.go:284] error running [docker network inspect ingress-addon-legacy-105176]: docker network inspect ingress-addon-legacy-105176: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-105176 not found
	I0108 20:22:40.427528  666261 network_create.go:286] output of [docker network inspect ingress-addon-legacy-105176]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-105176 not found
	
	** /stderr **
	I0108 20:22:40.427641  666261 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 20:22:40.445020  666261 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400068eb10}
	I0108 20:22:40.445070  666261 network_create.go:124] attempt to create docker network ingress-addon-legacy-105176 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0108 20:22:40.445134  666261 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-105176 ingress-addon-legacy-105176
	I0108 20:22:40.519370  666261 network_create.go:108] docker network ingress-addon-legacy-105176 192.168.49.0/24 created
	I0108 20:22:40.519402  666261 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-105176" container
	I0108 20:22:40.519480  666261 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 20:22:40.536224  666261 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-105176 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-105176 --label created_by.minikube.sigs.k8s.io=true
	I0108 20:22:40.555413  666261 oci.go:103] Successfully created a docker volume ingress-addon-legacy-105176
	I0108 20:22:40.555501  666261 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-105176-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-105176 --entrypoint /usr/bin/test -v ingress-addon-legacy-105176:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I0108 20:22:42.072746  666261 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-105176-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-105176 --entrypoint /usr/bin/test -v ingress-addon-legacy-105176:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib: (1.51720291s)
	I0108 20:22:42.072778  666261 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-105176
	I0108 20:22:42.072799  666261 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 20:22:42.072819  666261 kic.go:194] Starting extracting preloaded images to volume ...
	I0108 20:22:42.072921  666261 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-105176:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 20:22:46.950254  666261 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-105176:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (4.877289461s)
	I0108 20:22:46.950287  666261 kic.go:203] duration metric: took 4.877464 seconds to extract preloaded images to volume
	W0108 20:22:46.950418  666261 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 20:22:46.950566  666261 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 20:22:47.013985  666261 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-105176 --name ingress-addon-legacy-105176 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-105176 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-105176 --network ingress-addon-legacy-105176 --ip 192.168.49.2 --volume ingress-addon-legacy-105176:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0108 20:22:47.354984  666261 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-105176 --format={{.State.Running}}
	I0108 20:22:47.375978  666261 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-105176 --format={{.State.Status}}
	I0108 20:22:47.401240  666261 cli_runner.go:164] Run: docker exec ingress-addon-legacy-105176 stat /var/lib/dpkg/alternatives/iptables
	I0108 20:22:47.470154  666261 oci.go:144] the created container "ingress-addon-legacy-105176" has a running status.
	I0108 20:22:47.470234  666261 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17907-633350/.minikube/machines/ingress-addon-legacy-105176/id_rsa...
	I0108 20:22:48.298651  666261 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/machines/ingress-addon-legacy-105176/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0108 20:22:48.298759  666261 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17907-633350/.minikube/machines/ingress-addon-legacy-105176/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 20:22:48.321706  666261 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-105176 --format={{.State.Status}}
	I0108 20:22:48.341739  666261 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 20:22:48.341762  666261 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-105176 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 20:22:48.410451  666261 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-105176 --format={{.State.Status}}
	I0108 20:22:48.432960  666261 machine.go:88] provisioning docker machine ...
	I0108 20:22:48.432991  666261 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-105176"
	I0108 20:22:48.433059  666261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-105176
	I0108 20:22:48.463303  666261 main.go:141] libmachine: Using SSH client type: native
	I0108 20:22:48.463736  666261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33419 <nil> <nil>}
	I0108 20:22:48.463760  666261 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-105176 && echo "ingress-addon-legacy-105176" | sudo tee /etc/hostname
	I0108 20:22:48.625062  666261 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-105176
	
	I0108 20:22:48.625154  666261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-105176
	I0108 20:22:48.644512  666261 main.go:141] libmachine: Using SSH client type: native
	I0108 20:22:48.644932  666261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33419 <nil> <nil>}
	I0108 20:22:48.644960  666261 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-105176' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-105176/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-105176' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:22:48.783672  666261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:22:48.783701  666261 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17907-633350/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-633350/.minikube}
	I0108 20:22:48.783732  666261 ubuntu.go:177] setting up certificates
	I0108 20:22:48.783748  666261 provision.go:83] configureAuth start
	I0108 20:22:48.783816  666261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-105176
	I0108 20:22:48.802682  666261 provision.go:138] copyHostCerts
	I0108 20:22:48.802728  666261 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17907-633350/.minikube/ca.pem
	I0108 20:22:48.802762  666261 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-633350/.minikube/ca.pem, removing ...
	I0108 20:22:48.802774  666261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-633350/.minikube/ca.pem
	I0108 20:22:48.802852  666261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-633350/.minikube/ca.pem (1082 bytes)
	I0108 20:22:48.802945  666261 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17907-633350/.minikube/cert.pem
	I0108 20:22:48.802968  666261 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-633350/.minikube/cert.pem, removing ...
	I0108 20:22:48.802973  666261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-633350/.minikube/cert.pem
	I0108 20:22:48.803004  666261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-633350/.minikube/cert.pem (1123 bytes)
	I0108 20:22:48.803059  666261 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17907-633350/.minikube/key.pem
	I0108 20:22:48.803079  666261 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-633350/.minikube/key.pem, removing ...
	I0108 20:22:48.803085  666261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-633350/.minikube/key.pem
	I0108 20:22:48.803115  666261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-633350/.minikube/key.pem (1679 bytes)
	I0108 20:22:48.803172  666261 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-105176 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-105176]
	I0108 20:22:49.600417  666261 provision.go:172] copyRemoteCerts
	I0108 20:22:49.600496  666261 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:22:49.600563  666261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-105176
	I0108 20:22:49.621107  666261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/ingress-addon-legacy-105176/id_rsa Username:docker}
	I0108 20:22:49.722140  666261 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 20:22:49.722206  666261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 20:22:49.751410  666261 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 20:22:49.751479  666261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 20:22:49.780041  666261 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 20:22:49.780104  666261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0108 20:22:49.808663  666261 provision.go:86] duration metric: configureAuth took 1.02489523s
	I0108 20:22:49.808691  666261 ubuntu.go:193] setting minikube options for container-runtime
	I0108 20:22:49.808889  666261 config.go:182] Loaded profile config "ingress-addon-legacy-105176": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0108 20:22:49.808995  666261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-105176
	I0108 20:22:49.828439  666261 main.go:141] libmachine: Using SSH client type: native
	I0108 20:22:49.828866  666261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33419 <nil> <nil>}
	I0108 20:22:49.828889  666261 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 20:22:50.108036  666261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 20:22:50.108073  666261 machine.go:91] provisioned docker machine in 1.675082686s
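The provisioning step just completed wrote a one-line environment file, /etc/sysconfig/crio.minikube, and restarted CRI-O so the daemon picks up --insecure-registry for the 10.96.0.0/12 service CIDR; presumably the crio unit references that file via an EnvironmentFile= line, which is why a restart alone is enough for the option to take effect.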
	I0108 20:22:50.108084  666261 client.go:171] LocalClient.Create took 9.716767765s
	I0108 20:22:50.108097  666261 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-105176" took 9.716821608s
	I0108 20:22:50.108111  666261 start.go:300] post-start starting for "ingress-addon-legacy-105176" (driver="docker")
	I0108 20:22:50.108121  666261 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:22:50.108187  666261 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:22:50.108248  666261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-105176
	I0108 20:22:50.126313  666261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/ingress-addon-legacy-105176/id_rsa Username:docker}
	I0108 20:22:50.225579  666261 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:22:50.229821  666261 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 20:22:50.229858  666261 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 20:22:50.229887  666261 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 20:22:50.229902  666261 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 20:22:50.229913  666261 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-633350/.minikube/addons for local assets ...
	I0108 20:22:50.229986  666261 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-633350/.minikube/files for local assets ...
	I0108 20:22:50.230072  666261 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/6387322.pem -> 6387322.pem in /etc/ssl/certs
	I0108 20:22:50.230084  666261 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/6387322.pem -> /etc/ssl/certs/6387322.pem
	I0108 20:22:50.230204  666261 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 20:22:50.240950  666261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/6387322.pem --> /etc/ssl/certs/6387322.pem (1708 bytes)
	I0108 20:22:50.269243  666261 start.go:303] post-start completed in 161.115836ms
	I0108 20:22:50.269652  666261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-105176
	I0108 20:22:50.287295  666261 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/config.json ...
	I0108 20:22:50.287569  666261 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:22:50.287629  666261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-105176
	I0108 20:22:50.308512  666261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/ingress-addon-legacy-105176/id_rsa Username:docker}
	I0108 20:22:50.404476  666261 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 20:22:50.410250  666261 start.go:128] duration metric: createHost completed in 10.022084858s
	I0108 20:22:50.410277  666261 start.go:83] releasing machines lock for "ingress-addon-legacy-105176", held for 10.022211333s
	I0108 20:22:50.410346  666261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-105176
	I0108 20:22:50.427563  666261 ssh_runner.go:195] Run: cat /version.json
	I0108 20:22:50.427574  666261 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 20:22:50.427621  666261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-105176
	I0108 20:22:50.427645  666261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-105176
	I0108 20:22:50.449533  666261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/ingress-addon-legacy-105176/id_rsa Username:docker}
	I0108 20:22:50.459994  666261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/ingress-addon-legacy-105176/id_rsa Username:docker}
	I0108 20:22:50.678786  666261 ssh_runner.go:195] Run: systemctl --version
	I0108 20:22:50.684292  666261 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 20:22:50.836825  666261 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 20:22:50.842455  666261 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:22:50.869031  666261 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 20:22:50.869105  666261 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:22:50.908128  666261 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
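Renaming the stock loopback, bridge, and podman CNI configs to *.mk_disabled, rather than deleting them, takes them out of CRI-O's /etc/cni/net.d lookup while leaving them restorable; pod networking is instead provided by kindnet, which minikube selects further down for the docker-driver-plus-crio combination.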
	I0108 20:22:50.908151  666261 start.go:475] detecting cgroup driver to use...
	I0108 20:22:50.908183  666261 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 20:22:50.908250  666261 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 20:22:50.928302  666261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 20:22:50.941698  666261 docker.go:217] disabling cri-docker service (if available) ...
	I0108 20:22:50.941775  666261 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 20:22:50.957263  666261 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 20:22:50.974110  666261 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 20:22:51.072455  666261 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 20:22:51.187221  666261 docker.go:233] disabling docker service ...
	I0108 20:22:51.187290  666261 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 20:22:51.209122  666261 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 20:22:51.222572  666261 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 20:22:51.319158  666261 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 20:22:51.419858  666261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 20:22:51.432867  666261 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:22:51.452581  666261 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0108 20:22:51.452705  666261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:22:51.464098  666261 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 20:22:51.464170  666261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:22:51.476070  666261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:22:51.487644  666261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
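The sed invocations above pin the pause image and switch CRI-O to the cgroupfs manager with conmon in the "pod" cgroup. The same rewrite expressed in Go, as an illustrative sketch over the 02-crio.conf path from the log (not minikube's actual implementation):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	b, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	s := string(b)
	// Pin the pause image, as the first sed does.
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.2"`)
	// Drop any existing conmon_cgroup line, then re-add it after
	// cgroup_manager, matching the delete-then-append pair of seds above.
	s = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(s, "")
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(path, []byte(s), 0o644); err != nil {
		panic(err)
	}
}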
	I0108 20:22:51.499472  666261 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 20:22:51.510287  666261 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 20:22:51.520260  666261 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 20:22:51.530522  666261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:22:51.632229  666261 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 20:22:51.770924  666261 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 20:22:51.771035  666261 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 20:22:51.775540  666261 start.go:543] Will wait 60s for crictl version
	I0108 20:22:51.775618  666261 ssh_runner.go:195] Run: which crictl
	I0108 20:22:51.780116  666261 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 20:22:51.825129  666261 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0108 20:22:51.825241  666261 ssh_runner.go:195] Run: crio --version
	I0108 20:22:51.868526  666261 ssh_runner.go:195] Run: crio --version
	I0108 20:22:51.917086  666261 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0108 20:22:51.918967  666261 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-105176 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 20:22:51.935879  666261 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0108 20:22:51.940519  666261 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
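The /etc/hosts rewrite here (and the control-plane.minikube.internal one later) follows a deliberate pattern: grep -v strips any stale entry, echo appends the fresh one, the result lands in a temp file, and sudo cp installs it. The temp-file-plus-sudo-cp step is needed because the shell's own output redirection would run unprivileged, and stripping before appending makes the edit idempotent across repeated starts.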
	I0108 20:22:51.953865  666261 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 20:22:51.953941  666261 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:22:52.008051  666261 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0108 20:22:52.008137  666261 ssh_runner.go:195] Run: which lz4
	I0108 20:22:52.012870  666261 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0108 20:22:52.012971  666261 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0108 20:22:52.017499  666261 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 20:22:52.017537  666261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I0108 20:22:54.219883  666261 crio.go:444] Took 2.206945 seconds to copy over tarball
	I0108 20:22:54.219975  666261 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 20:22:56.840899  666261 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.620889854s)
	I0108 20:22:56.840925  666261 crio.go:451] Took 2.621014 seconds to extract the tarball
	I0108 20:22:56.840934  666261 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 20:22:56.925974  666261 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:22:56.965652  666261 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0108 20:22:56.965680  666261 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0108 20:22:56.965759  666261 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:22:56.965946  666261 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 20:22:56.966030  666261 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 20:22:56.966097  666261 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 20:22:56.966166  666261 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 20:22:56.966249  666261 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0108 20:22:56.966316  666261 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0108 20:22:56.966390  666261 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0108 20:22:56.967322  666261 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0108 20:22:56.967746  666261 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 20:22:56.967915  666261 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 20:22:56.968055  666261 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:22:56.968281  666261 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0108 20:22:56.968432  666261 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0108 20:22:56.968565  666261 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 20:22:56.969141  666261 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	W0108 20:22:57.275604  666261 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0108 20:22:57.275852  666261 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0108 20:22:57.288107  666261 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W0108 20:22:57.302467  666261 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0108 20:22:57.302701  666261 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W0108 20:22:57.308790  666261 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0108 20:22:57.309023  666261 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W0108 20:22:57.314300  666261 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0108 20:22:57.314535  666261 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W0108 20:22:57.325791  666261 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0108 20:22:57.326049  666261 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W0108 20:22:57.361952  666261 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0108 20:22:57.362211  666261 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0108 20:22:57.368878  666261 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0108 20:22:57.368947  666261 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 20:22:57.369003  666261 ssh_runner.go:195] Run: which crictl
	I0108 20:22:57.397726  666261 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0108 20:22:57.397805  666261 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0108 20:22:57.397882  666261 ssh_runner.go:195] Run: which crictl
	W0108 20:22:57.456515  666261 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0108 20:22:57.456675  666261 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:22:57.459160  666261 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0108 20:22:57.459202  666261 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 20:22:57.459253  666261 ssh_runner.go:195] Run: which crictl
	I0108 20:22:57.467355  666261 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0108 20:22:57.467405  666261 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 20:22:57.467461  666261 ssh_runner.go:195] Run: which crictl
	I0108 20:22:57.491796  666261 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0108 20:22:57.491847  666261 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 20:22:57.491912  666261 ssh_runner.go:195] Run: which crictl
	I0108 20:22:57.515772  666261 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0108 20:22:57.515856  666261 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0108 20:22:57.515935  666261 ssh_runner.go:195] Run: which crictl
	I0108 20:22:57.589125  666261 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0108 20:22:57.589172  666261 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0108 20:22:57.589226  666261 ssh_runner.go:195] Run: which crictl
	I0108 20:22:57.589249  666261 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0108 20:22:57.589327  666261 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0108 20:22:57.643385  666261 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0108 20:22:57.643439  666261 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:22:57.643494  666261 ssh_runner.go:195] Run: which crictl
	I0108 20:22:57.643590  666261 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0108 20:22:57.643665  666261 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0108 20:22:57.643733  666261 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 20:22:57.643810  666261 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0108 20:22:57.643891  666261 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0108 20:22:57.677252  666261 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0108 20:22:57.677404  666261 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0108 20:22:57.781569  666261 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0108 20:22:57.781685  666261 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:22:57.781807  666261 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0108 20:22:57.781872  666261 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0108 20:22:57.783845  666261 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0108 20:22:57.797529  666261 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0108 20:22:57.839389  666261 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0108 20:22:57.839522  666261 cache_images.go:92] LoadImages completed in 873.827828ms
	W0108 20:22:57.839603  666261 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2: no such file or directory
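The "Unable to load cached images" warning is non-fatal: the preload tarball and the per-image cache are both best-effort accelerations, and any image neither of them supplies is simply pulled from the registry during kubeadm's preflight, visible further down.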
	I0108 20:22:57.839713  666261 ssh_runner.go:195] Run: crio config
	I0108 20:22:57.903290  666261 cni.go:84] Creating CNI manager for ""
	I0108 20:22:57.903310  666261 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 20:22:57.903341  666261 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 20:22:57.903363  666261 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-105176 NodeName:ingress-addon-legacy-105176 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0108 20:22:57.903503  666261 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-105176"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
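The generated kubeadm config printed above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch for inspecting such a file, assuming gopkg.in/yaml.v3 and the /var/tmp/minikube/kubeadm.yaml path the file is eventually installed at:

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	// yaml.v3 decodes one document per Decode call until io.EOF,
	// so a multi-document stream is just a loop.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			break
		}
		fmt.Printf("%s %s\n", doc["apiVersion"], doc["kind"])
	}
}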
	
	I0108 20:22:57.903576  666261 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-105176 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-105176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
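In the kubelet drop-in above, the empty ExecStart= line is the standard systemd idiom for overriding rather than appending: the first line clears the ExecStart inherited from the base kubelet.service, and the second sets the full kubelet invocation with the CRI-O socket, CNI network plugin, and node IP.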
	I0108 20:22:57.903646  666261 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0108 20:22:57.914157  666261 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 20:22:57.914262  666261 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 20:22:57.924910  666261 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0108 20:22:57.945970  666261 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0108 20:22:57.966727  666261 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0108 20:22:57.987126  666261 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0108 20:22:57.991599  666261 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 20:22:58.004830  666261 certs.go:56] Setting up /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176 for IP: 192.168.49.2
	I0108 20:22:58.004866  666261 certs.go:190] acquiring lock for shared ca certs: {Name:mk28124a9f2c671691fce8a4307fb3ec09e97812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:22:58.005065  666261 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17907-633350/.minikube/ca.key
	I0108 20:22:58.005109  666261 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17907-633350/.minikube/proxy-client-ca.key
	I0108 20:22:58.005158  666261 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.key
	I0108 20:22:58.005179  666261 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt with IP's: []
	I0108 20:22:58.204183  666261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt ...
	I0108 20:22:58.204213  666261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: {Name:mkeaef2728a04799fdf81ab45ad7d9a329216264 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:22:58.204423  666261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.key ...
	I0108 20:22:58.204439  666261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.key: {Name:mk9a074ba77f04a957d8a30d6f015c16755333e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:22:58.204525  666261 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/apiserver.key.dd3b5fb2
	I0108 20:22:58.204542  666261 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 20:22:58.429206  666261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/apiserver.crt.dd3b5fb2 ...
	I0108 20:22:58.429237  666261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/apiserver.crt.dd3b5fb2: {Name:mkb9cf590f9f5dea82e886335d0043c27478a862 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:22:58.429419  666261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/apiserver.key.dd3b5fb2 ...
	I0108 20:22:58.429434  666261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/apiserver.key.dd3b5fb2: {Name:mk365ca3f640c8db4ee0a5440653e1706b4fbe8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:22:58.429521  666261 certs.go:337] copying /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/apiserver.crt
	I0108 20:22:58.429591  666261 certs.go:341] copying /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/apiserver.key
	I0108 20:22:58.429653  666261 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/proxy-client.key
	I0108 20:22:58.429670  666261 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/proxy-client.crt with IP's: []
	I0108 20:22:58.931157  666261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/proxy-client.crt ...
	I0108 20:22:58.931192  666261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/proxy-client.crt: {Name:mk23ae4a9aca2c3ccec31b368df3292dbe5751c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:22:58.931378  666261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/proxy-client.key ...
	I0108 20:22:58.931393  666261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/proxy-client.key: {Name:mkdea9f50d6eedced3be53bae0384e7d7054907f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
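Each "generating ... signed cert" step above is a fresh key pair signed by one of minikube's CAs; the apiserver cert, for example, carries the SANs listed earlier (192.168.49.2, 10.96.0.1, 127.0.0.1, 10.0.0.1). A minimal crypto/x509 sketch of that signing step, under the assumption of RSA keys and an in-memory CA (minikube's own crypto helpers differ in detail):

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// SignServingCert sketches the apiserver cert generation: a new key pair
// with IP SANs, signed by the given CA, returning the DER-encoded cert.
func SignServingCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}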
	I0108 20:22:58.931480  666261 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 20:22:58.931501  666261 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 20:22:58.931512  666261 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 20:22:58.931526  666261 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 20:22:58.931543  666261 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 20:22:58.931557  666261 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 20:22:58.931569  666261 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 20:22:58.931584  666261 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 20:22:58.931632  666261 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/home/jenkins/minikube-integration/17907-633350/.minikube/certs/638732.pem (1338 bytes)
	W0108 20:22:58.931672  666261 certs.go:433] ignoring /home/jenkins/minikube-integration/17907-633350/.minikube/certs/home/jenkins/minikube-integration/17907-633350/.minikube/certs/638732_empty.pem, impossibly tiny 0 bytes
	I0108 20:22:58.931688  666261 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 20:22:58.931716  666261 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem (1082 bytes)
	I0108 20:22:58.931749  666261 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem (1123 bytes)
	I0108 20:22:58.931780  666261 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/home/jenkins/minikube-integration/17907-633350/.minikube/certs/key.pem (1679 bytes)
	I0108 20:22:58.931837  666261 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/6387322.pem (1708 bytes)
	I0108 20:22:58.931872  666261 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/6387322.pem -> /usr/share/ca-certificates/6387322.pem
	I0108 20:22:58.931889  666261 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:22:58.931903  666261 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/638732.pem -> /usr/share/ca-certificates/638732.pem
	I0108 20:22:58.932469  666261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 20:22:58.960456  666261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 20:22:58.988852  666261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 20:22:59.017051  666261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 20:22:59.044002  666261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 20:22:59.071539  666261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 20:22:59.099657  666261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 20:22:59.129252  666261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 20:22:59.158364  666261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/6387322.pem --> /usr/share/ca-certificates/6387322.pem (1708 bytes)
	I0108 20:22:59.188143  666261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 20:22:59.216259  666261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/certs/638732.pem --> /usr/share/ca-certificates/638732.pem (1338 bytes)
	I0108 20:22:59.244655  666261 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 20:22:59.265136  666261 ssh_runner.go:195] Run: openssl version
	I0108 20:22:59.271918  666261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/638732.pem && ln -fs /usr/share/ca-certificates/638732.pem /etc/ssl/certs/638732.pem"
	I0108 20:22:59.283298  666261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/638732.pem
	I0108 20:22:59.287814  666261 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:18 /usr/share/ca-certificates/638732.pem
	I0108 20:22:59.287881  666261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/638732.pem
	I0108 20:22:59.296352  666261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/638732.pem /etc/ssl/certs/51391683.0"
	I0108 20:22:59.308045  666261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6387322.pem && ln -fs /usr/share/ca-certificates/6387322.pem /etc/ssl/certs/6387322.pem"
	I0108 20:22:59.319768  666261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6387322.pem
	I0108 20:22:59.324331  666261 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:18 /usr/share/ca-certificates/6387322.pem
	I0108 20:22:59.324438  666261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6387322.pem
	I0108 20:22:59.332771  666261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6387322.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 20:22:59.344152  666261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 20:22:59.355689  666261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:22:59.360328  666261 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:22:59.360398  666261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:22:59.368939  666261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
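The openssl x509 -hash -noout calls compute the subject hash under which OpenSSL indexes trust anchors, and the test -L || ln -fs step links /etc/ssl/certs/<hash>.0 (here b5213941.0 for minikubeCA.pem) to the PEM file; that symlink is what makes the CA visible to everything using the system trust store.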
	I0108 20:22:59.380786  666261 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 20:22:59.385258  666261 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 20:22:59.385351  666261 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-105176 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-105176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:22:59.385440  666261 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 20:22:59.385504  666261 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 20:22:59.429152  666261 cri.go:89] found id: ""
	I0108 20:22:59.429217  666261 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 20:22:59.439788  666261 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 20:22:59.450270  666261 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 20:22:59.450357  666261 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 20:22:59.461070  666261 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 20:22:59.461117  666261 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
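The long --ignore-preflight-errors list is a consequence of the docker driver: the "node" is a container sharing the host kernel, so checks such as Swap, NumCPU, SystemVerification, and the bridge-nf-call-iptables sysctl would fail or be meaningless there, as the "ignoring SystemVerification" line above already notes.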
	I0108 20:22:59.515034  666261 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0108 20:22:59.515266  666261 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 20:22:59.571958  666261 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0108 20:22:59.572049  666261 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0108 20:22:59.572105  666261 kubeadm.go:322] OS: Linux
	I0108 20:22:59.572167  666261 kubeadm.go:322] CGROUPS_CPU: enabled
	I0108 20:22:59.572234  666261 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0108 20:22:59.572297  666261 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0108 20:22:59.572362  666261 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0108 20:22:59.572442  666261 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0108 20:22:59.572522  666261 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0108 20:22:59.658700  666261 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 20:22:59.658876  666261 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 20:22:59.659011  666261 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 20:22:59.886957  666261 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 20:22:59.888476  666261 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 20:22:59.888687  666261 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 20:22:59.990803  666261 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 20:22:59.995014  666261 out.go:204]   - Generating certificates and keys ...
	I0108 20:22:59.995190  666261 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 20:22:59.995276  666261 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 20:23:00.434608  666261 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 20:23:00.938588  666261 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 20:23:02.163930  666261 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 20:23:02.707201  666261 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 20:23:03.546348  666261 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 20:23:03.546652  666261 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-105176 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 20:23:03.987080  666261 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 20:23:03.987233  666261 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-105176 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 20:23:04.583247  666261 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 20:23:05.235569  666261 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 20:23:05.479618  666261 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 20:23:05.480296  666261 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 20:23:05.863771  666261 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 20:23:06.627155  666261 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 20:23:06.821549  666261 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 20:23:07.460339  666261 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 20:23:07.461257  666261 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 20:23:07.464106  666261 out.go:204]   - Booting up control plane ...
	I0108 20:23:07.464222  666261 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 20:23:07.471833  666261 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 20:23:07.484269  666261 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 20:23:07.485397  666261 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 20:23:07.488210  666261 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 20:23:19.990644  666261 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.502366 seconds
	I0108 20:23:19.990762  666261 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 20:23:20.007485  666261 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 20:23:20.527900  666261 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 20:23:20.528055  666261 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-105176 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0108 20:23:21.037397  666261 kubeadm.go:322] [bootstrap-token] Using token: 3tgzlb.uhv30n7q29zxcw4l
	I0108 20:23:21.040149  666261 out.go:204]   - Configuring RBAC rules ...
	I0108 20:23:21.040272  666261 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 20:23:21.044679  666261 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 20:23:21.051954  666261 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 20:23:21.055131  666261 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 20:23:21.057558  666261 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 20:23:21.060304  666261 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 20:23:21.074127  666261 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 20:23:21.359054  666261 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 20:23:21.460040  666261 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 20:23:21.460065  666261 kubeadm.go:322] 
	I0108 20:23:21.460124  666261 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 20:23:21.460134  666261 kubeadm.go:322] 
	I0108 20:23:21.460214  666261 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 20:23:21.460223  666261 kubeadm.go:322] 
	I0108 20:23:21.460248  666261 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 20:23:21.460306  666261 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 20:23:21.460361  666261 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 20:23:21.460369  666261 kubeadm.go:322] 
	I0108 20:23:21.460418  666261 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 20:23:21.460491  666261 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 20:23:21.460559  666261 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 20:23:21.460567  666261 kubeadm.go:322] 
	I0108 20:23:21.460646  666261 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 20:23:21.460720  666261 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 20:23:21.460728  666261 kubeadm.go:322] 
	I0108 20:23:21.460806  666261 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 3tgzlb.uhv30n7q29zxcw4l \
	I0108 20:23:21.460909  666261 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7781d8275fe6fc370b9207d46f90d60f186320d9f0d72d24606e41c221afb39a \
	I0108 20:23:21.460934  666261 kubeadm.go:322]     --control-plane 
	I0108 20:23:21.460942  666261 kubeadm.go:322] 
	I0108 20:23:21.461021  666261 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 20:23:21.461037  666261 kubeadm.go:322] 
	I0108 20:23:21.461115  666261 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 3tgzlb.uhv30n7q29zxcw4l \
	I0108 20:23:21.461216  666261 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7781d8275fe6fc370b9207d46f90d60f186320d9f0d72d24606e41c221afb39a 
	I0108 20:23:21.465068  666261 kubeadm.go:322] W0108 20:22:59.514468    1225 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0108 20:23:21.465283  666261 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0108 20:23:21.465388  666261 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 20:23:21.465511  666261 kubeadm.go:322] W0108 20:23:07.482531    1225 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0108 20:23:21.465637  666261 kubeadm.go:322] W0108 20:23:07.484083    1225 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0108 20:23:21.465655  666261 cni.go:84] Creating CNI manager for ""
	I0108 20:23:21.465664  666261 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 20:23:21.469619  666261 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 20:23:21.471983  666261 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 20:23:21.476906  666261 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0108 20:23:21.476929  666261 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 20:23:21.499501  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 20:23:21.980312  666261 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 20:23:21.980449  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:21.980527  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28 minikube.k8s.io/name=ingress-addon-legacy-105176 minikube.k8s.io/updated_at=2024_01_08T20_23_21_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:22.124456  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:22.124516  666261 ops.go:34] apiserver oom_adj: -16
	I0108 20:23:22.625427  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:23.124576  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:23.625409  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:24.125292  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:24.624591  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:25.125158  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:25.625531  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:26.124587  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:26.624560  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:27.124715  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:27.624689  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:28.124835  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:28.624878  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:29.125061  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:29.625184  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:30.124911  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:30.624781  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:31.124666  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:31.625209  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:32.125025  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:32.624662  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:33.124938  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:33.625370  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:34.124588  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:34.624773  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:35.125132  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:35.624932  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:36.124536  666261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:23:36.234931  666261 kubeadm.go:1088] duration metric: took 14.254527354s to wait for elevateKubeSystemPrivileges.
	I0108 20:23:36.234964  666261 kubeadm.go:406] StartCluster complete in 36.849616149s
	I0108 20:23:36.234981  666261 settings.go:142] acquiring lock: {Name:mk63cb8f057d0d432df7260ff815cc6f0354f468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:23:36.235038  666261 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17907-633350/kubeconfig
	I0108 20:23:36.235800  666261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/kubeconfig: {Name:mk2f931b682c68dbcf44ed887f090aab8cb1a7c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:23:36.236506  666261 kapi.go:59] client config for ingress-addon-legacy-105176: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.key", CAFile:"/home/jenkins/minikube-integration/17907-633350/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:23:36.237596  666261 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 20:23:36.237876  666261 config.go:182] Loaded profile config "ingress-addon-legacy-105176": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0108 20:23:36.237916  666261 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 20:23:36.237982  666261 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-105176"
	I0108 20:23:36.237995  666261 addons.go:237] Setting addon storage-provisioner=true in "ingress-addon-legacy-105176"
	I0108 20:23:36.238047  666261 host.go:66] Checking if "ingress-addon-legacy-105176" exists ...
	I0108 20:23:36.238982  666261 cert_rotation.go:137] Starting client certificate rotation controller
	I0108 20:23:36.239027  666261 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-105176"
	I0108 20:23:36.239045  666261 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-105176"
	I0108 20:23:36.239385  666261 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-105176 --format={{.State.Status}}
	I0108 20:23:36.239618  666261 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-105176 --format={{.State.Status}}
	I0108 20:23:36.295110  666261 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:23:36.292686  666261 kapi.go:59] client config for ingress-addon-legacy-105176: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.key", CAFile:"/home/jenkins/minikube-integration/17907-633350/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:23:36.297797  666261 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:23:36.297814  666261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 20:23:36.297873  666261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-105176
	I0108 20:23:36.297915  666261 addons.go:237] Setting addon default-storageclass=true in "ingress-addon-legacy-105176"
	I0108 20:23:36.297945  666261 host.go:66] Checking if "ingress-addon-legacy-105176" exists ...
	I0108 20:23:36.298404  666261 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-105176 --format={{.State.Status}}
	I0108 20:23:36.334615  666261 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 20:23:36.334637  666261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 20:23:36.334694  666261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-105176
	I0108 20:23:36.337686  666261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/ingress-addon-legacy-105176/id_rsa Username:docker}
	I0108 20:23:36.362156  666261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/ingress-addon-legacy-105176/id_rsa Username:docker}
	I0108 20:23:36.411768  666261 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 20:23:36.570864  666261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:23:36.583885  666261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 20:23:36.815876  666261 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-105176" context rescaled to 1 replicas
	I0108 20:23:36.815925  666261 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 20:23:36.818958  666261 out.go:177] * Verifying Kubernetes components...
	I0108 20:23:36.821306  666261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:23:36.885643  666261 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0108 20:23:37.040210  666261 kapi.go:59] client config for ingress-addon-legacy-105176: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.key", CAFile:"/home/jenkins/minikube-integration/17907-633350/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:23:37.040473  666261 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-105176" to be "Ready" ...
	I0108 20:23:37.059311  666261 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 20:23:37.061727  666261 addons.go:508] enable addons completed in 823.801653ms: enabled=[storage-provisioner default-storageclass]
	I0108 20:23:39.045509  666261 node_ready.go:58] node "ingress-addon-legacy-105176" has status "Ready":"False"
	I0108 20:23:41.543787  666261 node_ready.go:58] node "ingress-addon-legacy-105176" has status "Ready":"False"
	I0108 20:23:44.043629  666261 node_ready.go:58] node "ingress-addon-legacy-105176" has status "Ready":"False"
	I0108 20:23:45.044656  666261 node_ready.go:49] node "ingress-addon-legacy-105176" has status "Ready":"True"
	I0108 20:23:45.044691  666261 node_ready.go:38] duration metric: took 8.004191749s waiting for node "ingress-addon-legacy-105176" to be "Ready" ...
	I0108 20:23:45.044702  666261 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:23:45.053125  666261 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-42vqr" in "kube-system" namespace to be "Ready" ...
	I0108 20:23:47.055923  666261 pod_ready.go:102] pod "coredns-66bff467f8-42vqr" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-08 20:23:36 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0108 20:23:49.564237  666261 pod_ready.go:102] pod "coredns-66bff467f8-42vqr" in "kube-system" namespace has status "Ready":"False"
	I0108 20:23:52.058991  666261 pod_ready.go:102] pod "coredns-66bff467f8-42vqr" in "kube-system" namespace has status "Ready":"False"
	I0108 20:23:54.558396  666261 pod_ready.go:102] pod "coredns-66bff467f8-42vqr" in "kube-system" namespace has status "Ready":"False"
	I0108 20:23:56.558809  666261 pod_ready.go:102] pod "coredns-66bff467f8-42vqr" in "kube-system" namespace has status "Ready":"False"
	I0108 20:23:57.559308  666261 pod_ready.go:92] pod "coredns-66bff467f8-42vqr" in "kube-system" namespace has status "Ready":"True"
	I0108 20:23:57.559337  666261 pod_ready.go:81] duration metric: took 12.506175419s waiting for pod "coredns-66bff467f8-42vqr" in "kube-system" namespace to be "Ready" ...
	I0108 20:23:57.559349  666261 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-105176" in "kube-system" namespace to be "Ready" ...
	I0108 20:23:57.564081  666261 pod_ready.go:92] pod "etcd-ingress-addon-legacy-105176" in "kube-system" namespace has status "Ready":"True"
	I0108 20:23:57.564111  666261 pod_ready.go:81] duration metric: took 4.7509ms waiting for pod "etcd-ingress-addon-legacy-105176" in "kube-system" namespace to be "Ready" ...
	I0108 20:23:57.564127  666261 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-105176" in "kube-system" namespace to be "Ready" ...
	I0108 20:23:57.568555  666261 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-105176" in "kube-system" namespace has status "Ready":"True"
	I0108 20:23:57.568582  666261 pod_ready.go:81] duration metric: took 4.446938ms waiting for pod "kube-apiserver-ingress-addon-legacy-105176" in "kube-system" namespace to be "Ready" ...
	I0108 20:23:57.568594  666261 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-105176" in "kube-system" namespace to be "Ready" ...
	I0108 20:23:57.572841  666261 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-105176" in "kube-system" namespace has status "Ready":"True"
	I0108 20:23:57.572867  666261 pod_ready.go:81] duration metric: took 4.264241ms waiting for pod "kube-controller-manager-ingress-addon-legacy-105176" in "kube-system" namespace to be "Ready" ...
	I0108 20:23:57.572878  666261 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nnxpz" in "kube-system" namespace to be "Ready" ...
	I0108 20:23:57.577244  666261 pod_ready.go:92] pod "kube-proxy-nnxpz" in "kube-system" namespace has status "Ready":"True"
	I0108 20:23:57.577272  666261 pod_ready.go:81] duration metric: took 4.3864ms waiting for pod "kube-proxy-nnxpz" in "kube-system" namespace to be "Ready" ...
	I0108 20:23:57.577287  666261 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-105176" in "kube-system" namespace to be "Ready" ...
	I0108 20:23:57.754803  666261 request.go:629] Waited for 177.420294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-105176
	I0108 20:23:57.954790  666261 request.go:629] Waited for 197.31972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-105176
	I0108 20:23:57.957621  666261 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-105176" in "kube-system" namespace has status "Ready":"True"
	I0108 20:23:57.957648  666261 pod_ready.go:81] duration metric: took 380.35177ms waiting for pod "kube-scheduler-ingress-addon-legacy-105176" in "kube-system" namespace to be "Ready" ...
	I0108 20:23:57.957661  666261 pod_ready.go:38] duration metric: took 12.912943027s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:23:57.957680  666261 api_server.go:52] waiting for apiserver process to appear ...
	I0108 20:23:57.957746  666261 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:23:57.970924  666261 api_server.go:72] duration metric: took 21.154961471s to wait for apiserver process to appear ...
	I0108 20:23:57.970950  666261 api_server.go:88] waiting for apiserver healthz status ...
	I0108 20:23:57.970981  666261 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0108 20:23:57.979693  666261 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0108 20:23:57.980603  666261 api_server.go:141] control plane version: v1.18.20
	I0108 20:23:57.980626  666261 api_server.go:131] duration metric: took 9.669456ms to wait for apiserver health ...
	I0108 20:23:57.980637  666261 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 20:23:58.154915  666261 request.go:629] Waited for 174.182884ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0108 20:23:58.173222  666261 system_pods.go:59] 8 kube-system pods found
	I0108 20:23:58.173260  666261 system_pods.go:61] "coredns-66bff467f8-42vqr" [eac2e2da-bcc5-4645-b48f-9a0fc78f90d7] Running
	I0108 20:23:58.173268  666261 system_pods.go:61] "etcd-ingress-addon-legacy-105176" [a8971b1a-a434-44a2-882f-55588d084f37] Running
	I0108 20:23:58.173273  666261 system_pods.go:61] "kindnet-h92r6" [436430f4-b1e0-4028-b5fe-1b5771852d61] Running
	I0108 20:23:58.173279  666261 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-105176" [0c0322a5-5c88-464b-9146-ed42b8ba8f26] Running
	I0108 20:23:58.173285  666261 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-105176" [0454a8ef-092e-49cd-9da8-344a6022a80b] Running
	I0108 20:23:58.173290  666261 system_pods.go:61] "kube-proxy-nnxpz" [d0d7300b-cd5b-4b16-b669-ec4c1e093708] Running
	I0108 20:23:58.173295  666261 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-105176" [95b4efe8-0b69-49b5-b053-df541765863d] Running
	I0108 20:23:58.173305  666261 system_pods.go:61] "storage-provisioner" [5d107a90-3e24-4e07-88da-b75ff6017829] Running
	I0108 20:23:58.173311  666261 system_pods.go:74] duration metric: took 192.6507ms to wait for pod list to return data ...
	I0108 20:23:58.173322  666261 default_sa.go:34] waiting for default service account to be created ...
	I0108 20:23:58.354823  666261 request.go:629] Waited for 181.430109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0108 20:23:58.357833  666261 default_sa.go:45] found service account: "default"
	I0108 20:23:58.357863  666261 default_sa.go:55] duration metric: took 184.534382ms for default service account to be created ...
	I0108 20:23:58.357878  666261 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 20:23:58.554965  666261 request.go:629] Waited for 196.985711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0108 20:23:58.560839  666261 system_pods.go:86] 8 kube-system pods found
	I0108 20:23:58.560871  666261 system_pods.go:89] "coredns-66bff467f8-42vqr" [eac2e2da-bcc5-4645-b48f-9a0fc78f90d7] Running
	I0108 20:23:58.560879  666261 system_pods.go:89] "etcd-ingress-addon-legacy-105176" [a8971b1a-a434-44a2-882f-55588d084f37] Running
	I0108 20:23:58.560884  666261 system_pods.go:89] "kindnet-h92r6" [436430f4-b1e0-4028-b5fe-1b5771852d61] Running
	I0108 20:23:58.560889  666261 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-105176" [0c0322a5-5c88-464b-9146-ed42b8ba8f26] Running
	I0108 20:23:58.560894  666261 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-105176" [0454a8ef-092e-49cd-9da8-344a6022a80b] Running
	I0108 20:23:58.560899  666261 system_pods.go:89] "kube-proxy-nnxpz" [d0d7300b-cd5b-4b16-b669-ec4c1e093708] Running
	I0108 20:23:58.560907  666261 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-105176" [95b4efe8-0b69-49b5-b053-df541765863d] Running
	I0108 20:23:58.560918  666261 system_pods.go:89] "storage-provisioner" [5d107a90-3e24-4e07-88da-b75ff6017829] Running
	I0108 20:23:58.560925  666261 system_pods.go:126] duration metric: took 203.008228ms to wait for k8s-apps to be running ...
	I0108 20:23:58.560936  666261 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 20:23:58.560994  666261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:23:58.574210  666261 system_svc.go:56] duration metric: took 13.263817ms WaitForService to wait for kubelet.
	I0108 20:23:58.574238  666261 kubeadm.go:581] duration metric: took 21.758282351s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 20:23:58.574257  666261 node_conditions.go:102] verifying NodePressure condition ...
	I0108 20:23:58.754636  666261 request.go:629] Waited for 180.293796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0108 20:23:58.757523  666261 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0108 20:23:58.757556  666261 node_conditions.go:123] node cpu capacity is 2
	I0108 20:23:58.757570  666261 node_conditions.go:105] duration metric: took 183.306786ms to run NodePressure ...
	I0108 20:23:58.757583  666261 start.go:228] waiting for startup goroutines ...
	I0108 20:23:58.757591  666261 start.go:233] waiting for cluster config update ...
	I0108 20:23:58.757600  666261 start.go:242] writing updated cluster config ...
	I0108 20:23:58.757887  666261 ssh_runner.go:195] Run: rm -f paused
	I0108 20:23:58.819022  666261 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I0108 20:23:58.821435  666261 out.go:177] 
	W0108 20:23:58.823267  666261 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0108 20:23:58.824998  666261 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0108 20:23:58.827053  666261 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-105176" cluster and "default" namespace by default
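	
	The kubeadm join commands logged above embed a --discovery-token-ca-cert-hash value. For reference, that hash is the SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate. The Go sketch below is an illustrative aside, not part of the test run: it recomputes the hash, assuming the CA lives at the certificateDir logged earlier (/var/lib/minikube/certs).
	
	// cahash.go - recompute kubeadm's discovery-token-ca-cert-hash (illustrative sketch).
	package main
	
	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)
	
	func main() {
		// Path assumed from the "[certs] Using certificateDir" line above.
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			log.Fatal("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// kubeadm hashes the raw SubjectPublicKeyInfo, not the whole certificate.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}
	
	Run against this cluster's CA, the output should match the sha256:7781d827... value in the join command.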
	
	
	==> CRI-O <==
	Jan 08 20:27:35 ingress-addon-legacy-105176 crio[895]: time="2024-01-08 20:27:35.794496036Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=7232b5fe-c45a-4a84-8da3-827fdbe04ed4 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jan 08 20:27:35 ingress-addon-legacy-105176 crio[895]: time="2024-01-08 20:27:35.794679989Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=7232b5fe-c45a-4a84-8da3-827fdbe04ed4 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jan 08 20:27:35 ingress-addon-legacy-105176 crio[895]: time="2024-01-08 20:27:35.795516255Z" level=info msg="Creating container: default/hello-world-app-5f5d8b66bb-7hkvl/hello-world-app" id=6dd3a183-b699-435d-891f-429343b27a9b name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jan 08 20:27:35 ingress-addon-legacy-105176 crio[895]: time="2024-01-08 20:27:35.795605724Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 08 20:27:35 ingress-addon-legacy-105176 crio[895]: time="2024-01-08 20:27:35.867839724Z" level=info msg="Created container ac545635d4014e0cada0ac1ed546ca04e4df0e8b6066dfee03dc7b9bbc0fa8be: default/hello-world-app-5f5d8b66bb-7hkvl/hello-world-app" id=6dd3a183-b699-435d-891f-429343b27a9b name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jan 08 20:27:35 ingress-addon-legacy-105176 crio[895]: time="2024-01-08 20:27:35.868903257Z" level=info msg="Starting container: ac545635d4014e0cada0ac1ed546ca04e4df0e8b6066dfee03dc7b9bbc0fa8be" id=1ea0df3b-f632-4284-8c89-cb90e4b5763a name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jan 08 20:27:35 ingress-addon-legacy-105176 conmon[3679]: conmon ac545635d4014e0cada0 <ninfo>: container 3690 exited with status 1
	Jan 08 20:27:35 ingress-addon-legacy-105176 crio[895]: time="2024-01-08 20:27:35.882813385Z" level=info msg="Started container" PID=3690 containerID=ac545635d4014e0cada0ac1ed546ca04e4df0e8b6066dfee03dc7b9bbc0fa8be description=default/hello-world-app-5f5d8b66bb-7hkvl/hello-world-app id=1ea0df3b-f632-4284-8c89-cb90e4b5763a name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=cd2ddeaf98da74ba27c9f7027a20c160975ab2ee09ca022598d103fae8e6425a
	Jan 08 20:27:36 ingress-addon-legacy-105176 crio[895]: time="2024-01-08 20:27:36.412354181Z" level=warning msg="Stopping container 553d53b8031e9361f32f0560229c0dd3feaa63cac8381334d26efbfbc11dff15 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=a7e51df7-2189-472d-b373-30b3a4433b23 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 08 20:27:36 ingress-addon-legacy-105176 crio[895]: time="2024-01-08 20:27:36.471201660Z" level=info msg="Removing container: 63057a4213935ac50a001c30e221f5381509a5e99cd88a021b89d29a9335f1b6" id=eec6225d-8f1a-40c2-86cb-7c438d73b595 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jan 08 20:27:36 ingress-addon-legacy-105176 conmon[2705]: conmon 553d53b8031e9361f32f <ninfo>: container 2716 exited with status 137
	Jan 08 20:27:36 ingress-addon-legacy-105176 crio[895]: time="2024-01-08 20:27:36.497943103Z" level=info msg="Removed container 63057a4213935ac50a001c30e221f5381509a5e99cd88a021b89d29a9335f1b6: default/hello-world-app-5f5d8b66bb-7hkvl/hello-world-app" id=eec6225d-8f1a-40c2-86cb-7c438d73b595 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jan 08 20:27:36 ingress-addon-legacy-105176 crio[895]: time="2024-01-08 20:27:36.573587160Z" level=info msg="Stopped container 553d53b8031e9361f32f0560229c0dd3feaa63cac8381334d26efbfbc11dff15: ingress-nginx/ingress-nginx-controller-7fcf777cb7-9fwq9/controller" id=a7e51df7-2189-472d-b373-30b3a4433b23 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 08 20:27:36 ingress-addon-legacy-105176 crio[895]: time="2024-01-08 20:27:36.574089728Z" level=info msg="Stopped container 553d53b8031e9361f32f0560229c0dd3feaa63cac8381334d26efbfbc11dff15: ingress-nginx/ingress-nginx-controller-7fcf777cb7-9fwq9/controller" id=4c460165-e57c-487b-be1f-7ee1b79db4c4 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 08 20:27:36 ingress-addon-legacy-105176 crio[895]: time="2024-01-08 20:27:36.574163485Z" level=info msg="Stopping pod sandbox: 6e760b3a29a81b509077fef53dc4d9ae55ef0bc82c35be1fa0af424c801469ef" id=088ee5b0-105a-47df-86e8-c22679d02317 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 20:27:36 ingress-addon-legacy-105176 crio[895]: time="2024-01-08 20:27:36.574755662Z" level=info msg="Stopping pod sandbox: 6e760b3a29a81b509077fef53dc4d9ae55ef0bc82c35be1fa0af424c801469ef" id=94e21e07-686c-4a6d-8d2b-d86140acb278 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 20:27:36 ingress-addon-legacy-105176 crio[895]: time="2024-01-08 20:27:36.577725378Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-KZEXLBREPNQ2LROG - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-3ODBFJM7KJIYAFVY - [0:0]\n-X KUBE-HP-3ODBFJM7KJIYAFVY\n-X KUBE-HP-KZEXLBREPNQ2LROG\nCOMMIT\n"
	Jan 08 20:27:36 ingress-addon-legacy-105176 crio[895]: time="2024-01-08 20:27:36.579313216Z" level=info msg="Closing host port tcp:80"
	Jan 08 20:27:36 ingress-addon-legacy-105176 crio[895]: time="2024-01-08 20:27:36.579361274Z" level=info msg="Closing host port tcp:443"
	Jan 08 20:27:36 ingress-addon-legacy-105176 crio[895]: time="2024-01-08 20:27:36.580650959Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jan 08 20:27:36 ingress-addon-legacy-105176 crio[895]: time="2024-01-08 20:27:36.580683074Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jan 08 20:27:36 ingress-addon-legacy-105176 crio[895]: time="2024-01-08 20:27:36.580833007Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-9fwq9 Namespace:ingress-nginx ID:6e760b3a29a81b509077fef53dc4d9ae55ef0bc82c35be1fa0af424c801469ef UID:1b84907b-c2a4-4d82-8085-cbe7502df342 NetNS:/var/run/netns/58b9da28-9965-4e3b-97a5-81af613eb27c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 08 20:27:36 ingress-addon-legacy-105176 crio[895]: time="2024-01-08 20:27:36.580965415Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-9fwq9 from CNI network \"kindnet\" (type=ptp)"
	Jan 08 20:27:36 ingress-addon-legacy-105176 crio[895]: time="2024-01-08 20:27:36.610203298Z" level=info msg="Stopped pod sandbox: 6e760b3a29a81b509077fef53dc4d9ae55ef0bc82c35be1fa0af424c801469ef" id=088ee5b0-105a-47df-86e8-c22679d02317 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 20:27:36 ingress-addon-legacy-105176 crio[895]: time="2024-01-08 20:27:36.610328157Z" level=info msg="Stopped pod sandbox (already stopped): 6e760b3a29a81b509077fef53dc4d9ae55ef0bc82c35be1fa0af424c801469ef" id=94e21e07-686c-4a6d-8d2b-d86140acb278 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
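	
	The CRI-O entries above name gRPC methods such as /runtime.v1alpha2.RuntimeService/StopContainer, served on the node's CRI socket. As a minimal sketch of talking to that same endpoint (assuming the deprecated v1alpha2 CRI API these log lines reference is still served, and the socket path from the node's cri-socket annotation below):
	
	// crilist.go - list containers over the CRI socket (illustrative sketch).
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		cri "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	)
	
	func main() {
		// Socket path assumed from the kubeadm.alpha.kubernetes.io/cri-socket annotation.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		rt := cri.NewRuntimeServiceClient(conn)
		resp, err := rt.ListContainers(ctx, &cri.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			// Container IDs are 64 hex chars; print the 13-char prefix as crictl does.
			fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.State, c.Metadata.Name)
		}
	}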
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ac545635d4014       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                   6 seconds ago       Exited              hello-world-app           2                   cd2ddeaf98da7       hello-world-app-5f5d8b66bb-7hkvl
	6345c3e4b78c0       docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb                    2 minutes ago       Running             nginx                     0                   e2a6d10bace79       nginx
	553d53b8031e9       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   6e760b3a29a81       ingress-nginx-controller-7fcf777cb7-9fwq9
	1c09ed4a1d0d7       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              patch                     0                   6d693d521585a       ingress-nginx-admission-patch-5d2sv
	823fdc67397e3       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   59f422fc62201       ingress-nginx-admission-create-57st7
	4d73d3dd9ae3c       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   73ae6c04e009e       coredns-66bff467f8-42vqr
	30583351a99d8       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   0cf46f24caba0       storage-provisioner
	915887ffc1df6       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 4 minutes ago       Running             kindnet-cni               0                   790755473f665       kindnet-h92r6
	7f3457151bf98       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   4 minutes ago       Running             kube-proxy                0                   61e10994a0f71       kube-proxy-nnxpz
	495279267b02b       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   4 minutes ago       Running             kube-controller-manager   0                   d0cd097255cfa       kube-controller-manager-ingress-addon-legacy-105176
	9827aa38b1648       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   4 minutes ago       Running             etcd                      0                   c00dbb2d2c0e7       etcd-ingress-addon-legacy-105176
	ddbccea94fa38       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   4 minutes ago       Running             kube-apiserver            0                   f0eaa59d3329a       kube-apiserver-ingress-addon-legacy-105176
	347387fea614f       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   4 minutes ago       Running             kube-scheduler            0                   6e201587d2bb5       kube-scheduler-ingress-addon-legacy-105176
	
	
	==> coredns [4d73d3dd9ae3c3d7131547662354936b075b4c93f6c6e059b22c381668cd00a0] <==
	[INFO] 10.244.0.5:41537 - 47520 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000042593s
	[INFO] 10.244.0.5:41537 - 25532 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055246s
	[INFO] 10.244.0.5:41537 - 42713 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041814s
	[INFO] 10.244.0.5:41537 - 17505 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000032895s
	[INFO] 10.244.0.5:41537 - 7117 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001279313s
	[INFO] 10.244.0.5:41537 - 20637 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003873535s
	[INFO] 10.244.0.5:41537 - 61000 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000073461s
	[INFO] 10.244.0.5:44501 - 34589 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000073026s
	[INFO] 10.244.0.5:45162 - 10300 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000028824s
	[INFO] 10.244.0.5:44501 - 4958 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000040197s
	[INFO] 10.244.0.5:44501 - 37064 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000033559s
	[INFO] 10.244.0.5:45162 - 9616 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000026815s
	[INFO] 10.244.0.5:45162 - 4632 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032222s
	[INFO] 10.244.0.5:44501 - 10329 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00002336s
	[INFO] 10.244.0.5:44501 - 51880 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000089183s
	[INFO] 10.244.0.5:45162 - 62978 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000039828s
	[INFO] 10.244.0.5:44501 - 13348 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00002345s
	[INFO] 10.244.0.5:44501 - 32741 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001147128s
	[INFO] 10.244.0.5:45162 - 34853 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000031639s
	[INFO] 10.244.0.5:45162 - 30532 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003356s
	[INFO] 10.244.0.5:45162 - 36026 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001502758s
	[INFO] 10.244.0.5:44501 - 10567 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002218497s
	[INFO] 10.244.0.5:44501 - 26255 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000036513s
	[INFO] 10.244.0.5:45162 - 64771 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00103306s
	[INFO] 10.244.0.5:45162 - 39113 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000041502s
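	
	The query pattern above is the standard resolver search-path expansion: with ndots:5, "hello-world-app.default.svc.cluster.local" contains only 4 dots, so the client (10.244.0.5, the ingress-nginx controller resolving its upstream) first tries each search suffix (NXDOMAIN) and only then the name as-is (NOERROR). A minimal sketch reproducing the candidate-name generation, with the search list inferred from the log:
	
	// ndots.go - reproduce resolv.conf search-path expansion (illustrative sketch).
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// candidates returns the lookup order: if the name has fewer than ndots dots,
	// each search suffix is appended first, then the name is tried unmodified.
	func candidates(name string, ndots int, search []string) []string {
		var out []string
		if strings.Count(name, ".") < ndots {
			for _, s := range search {
				out = append(out, name+"."+s)
			}
		}
		return append(out, name)
	}
	
	func main() {
		// Search list assumed for a pod in the ingress-nginx namespace of this cluster.
		search := []string{
			"ingress-nginx.svc.cluster.local",
			"svc.cluster.local",
			"cluster.local",
			"us-east-2.compute.internal",
		}
		for _, q := range candidates("hello-world-app.default.svc.cluster.local", 5, search) {
			fmt.Println(q)
		}
	}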
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-105176
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-105176
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=ingress-addon-legacy-105176
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T20_23_21_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 20:23:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-105176
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 20:27:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 20:27:25 +0000   Mon, 08 Jan 2024 20:23:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 20:27:25 +0000   Mon, 08 Jan 2024 20:23:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 20:27:25 +0000   Mon, 08 Jan 2024 20:23:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 20:27:25 +0000   Mon, 08 Jan 2024 20:23:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-105176
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 094d3d9b624648ee920576587f92c62b
	  System UUID:                f6ede4c2-3184-4a67-bcc7-ab40ed436e94
	  Boot ID:                    9a753e90-64b1-452a-8e10-9b878947801f
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-7hkvl                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	  kube-system                 coredns-66bff467f8-42vqr                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m6s
	  kube-system                 etcd-ingress-addon-legacy-105176                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kindnet-h92r6                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m6s
	  kube-system                 kube-apiserver-ingress-addon-legacy-105176             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-105176    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-nnxpz                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-scheduler-ingress-addon-legacy-105176             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  4m32s (x5 over 4m32s)  kubelet     Node ingress-addon-legacy-105176 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m32s (x5 over 4m32s)  kubelet     Node ingress-addon-legacy-105176 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m32s (x5 over 4m32s)  kubelet     Node ingress-addon-legacy-105176 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m18s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m18s                  kubelet     Node ingress-addon-legacy-105176 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m18s                  kubelet     Node ingress-addon-legacy-105176 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s                  kubelet     Node ingress-addon-legacy-105176 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m3s                   kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m58s                  kubelet     Node ingress-addon-legacy-105176 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001079] FS-Cache: O-key=[8] 'a070ed0000000000'
	[  +0.000709] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000950] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=00000000b771cf9b
	[  +0.001160] FS-Cache: N-key=[8] 'a070ed0000000000'
	[  +0.005206] FS-Cache: Duplicate cookie detected
	[  +0.000737] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.000938] FS-Cache: O-cookie d=000000001df03bef{9p.inode} n=000000007ac10e0d
	[  +0.001142] FS-Cache: O-key=[8] 'a070ed0000000000'
	[  +0.000710] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000958] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=00000000372890f4
	[  +0.001099] FS-Cache: N-key=[8] 'a070ed0000000000'
	[  +2.042043] FS-Cache: Duplicate cookie detected
	[  +0.000813] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.001067] FS-Cache: O-cookie d=000000001df03bef{9p.inode} n=00000000126b129b
	[  +0.001188] FS-Cache: O-key=[8] '9f70ed0000000000'
	[  +0.000740] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000987] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=00000000b771cf9b
	[  +0.001256] FS-Cache: N-key=[8] '9f70ed0000000000'
	[  +0.329505] FS-Cache: Duplicate cookie detected
	[  +0.000784] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.001108] FS-Cache: O-cookie d=000000001df03bef{9p.inode} n=0000000009be8c6c
	[  +0.001149] FS-Cache: O-key=[8] 'a570ed0000000000'
	[  +0.000824] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000998] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=000000006cd4597f
	[  +0.001205] FS-Cache: N-key=[8] 'a570ed0000000000'
	
	
	==> etcd [9827aa38b164829031da8d81458cae6578ad630146b5efdb342d9bebaf270274] <==
	raft2024/01/08 20:23:13 INFO: aec36adc501070cc became follower at term 0
	raft2024/01/08 20:23:13 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2024/01/08 20:23:13 INFO: aec36adc501070cc became follower at term 1
	raft2024/01/08 20:23:13 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-08 20:23:13.399387 W | auth: simple token is not cryptographically signed
	2024-01-08 20:23:13.683327 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-01-08 20:23:13.705118 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-08 20:23:13.705466 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-08 20:23:13.705639 I | embed: listening for peers on 192.168.49.2:2380
	2024-01-08 20:23:13.705789 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/01/08 20:23:13 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-08 20:23:13.706359 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2024/01/08 20:23:13 INFO: aec36adc501070cc is starting a new election at term 1
	raft2024/01/08 20:23:13 INFO: aec36adc501070cc became candidate at term 2
	raft2024/01/08 20:23:13 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2024/01/08 20:23:13 INFO: aec36adc501070cc became leader at term 2
	raft2024/01/08 20:23:13 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2024-01-08 20:23:13.962936 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-08 20:23:13.966667 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-08 20:23:13.982516 I | etcdserver/api: enabled capabilities for version 3.4
	2024-01-08 20:23:13.982607 I | etcdserver: published {Name:ingress-addon-legacy-105176 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2024-01-08 20:23:13.988617 I | embed: ready to serve client requests
	2024-01-08 20:23:13.994723 I | embed: ready to serve client requests
	2024-01-08 20:23:14.201749 I | embed: serving client requests on 192.168.49.2:2379
	2024-01-08 20:23:14.258591 I | embed: serving client requests on 127.0.0.1:2379
	
	
	==> kernel <==
	 20:27:42 up  3:10,  0 users,  load average: 0.18, 0.96, 1.24
	Linux ingress-addon-legacy-105176 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [915887ffc1df6a3228683cc168681d150a48f9ef81be1b36da078e7c7955026f] <==
	I0108 20:25:40.022860       1 main.go:227] handling current node
	I0108 20:25:50.026963       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:25:50.026996       1 main.go:227] handling current node
	I0108 20:26:00.032877       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:26:00.033030       1 main.go:227] handling current node
	I0108 20:26:10.041697       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:26:10.041727       1 main.go:227] handling current node
	I0108 20:26:20.045173       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:26:20.045202       1 main.go:227] handling current node
	I0108 20:26:30.055947       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:26:30.055976       1 main.go:227] handling current node
	I0108 20:26:40.059712       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:26:40.059740       1 main.go:227] handling current node
	I0108 20:26:50.071241       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:26:50.071276       1 main.go:227] handling current node
	I0108 20:27:00.082561       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:27:00.082596       1 main.go:227] handling current node
	I0108 20:27:10.090736       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:27:10.090766       1 main.go:227] handling current node
	I0108 20:27:20.102422       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:27:20.102557       1 main.go:227] handling current node
	I0108 20:27:30.109435       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:27:30.109549       1 main.go:227] handling current node
	I0108 20:27:40.112863       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:27:40.112896       1 main.go:227] handling current node
	
	
	==> kube-apiserver [ddbccea94fa38c5b9923caaeb89f27a5c6fcea187e680bf13c5a895cf0ecb440] <==
	E0108 20:23:18.103658       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0108 20:23:18.259503       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0108 20:23:18.259551       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 20:23:18.259565       1 cache.go:39] Caches are synced for autoregister controller
	I0108 20:23:18.259873       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0108 20:23:18.268880       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 20:23:19.056591       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0108 20:23:19.056623       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0108 20:23:19.064079       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0108 20:23:19.067874       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0108 20:23:19.067952       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0108 20:23:19.491904       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 20:23:19.542941       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0108 20:23:19.693623       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0108 20:23:19.694698       1 controller.go:609] quota admission added evaluator for: endpoints
	I0108 20:23:19.698519       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 20:23:20.513228       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0108 20:23:21.332522       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0108 20:23:21.444416       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0108 20:23:24.749717       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 20:23:36.517706       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0108 20:23:36.543375       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0108 20:23:59.748977       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0108 20:24:26.706621       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0108 20:27:33.482924       1 watch.go:251] unable to encode watch object *v1.WatchEvent: client disconnected (&streaming.encoder{writer:(*http2.responseWriter)(0x400f0403f0), encoder:(*versioning.codec)(0x4008153680), buf:(*bytes.Buffer)(0x400781ac00)})
	
	
	==> kube-controller-manager [495279267b02b32912fc3a735d8cd8b26541a554f46d749b9a575a24dfd3dc47] <==
	I0108 20:23:36.519210       1 shared_informer.go:230] Caches are synced for ReplicaSet 
	I0108 20:23:36.519249       1 shared_informer.go:230] Caches are synced for resource quota 
	I0108 20:23:36.519298       1 shared_informer.go:230] Caches are synced for stateful set 
	I0108 20:23:36.530149       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-105176", UID:"5decca87-facb-49cd-b78d-be4528687d64", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-105176 event: Registered Node ingress-addon-legacy-105176 in Controller
	I0108 20:23:36.530836       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	W0108 20:23:36.530931       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-105176. Assuming now as a timestamp.
	I0108 20:23:36.530984       1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0108 20:23:36.575496       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0108 20:23:36.575534       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0108 20:23:36.647158       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"fac82837-ed94-4a8a-82d7-feda24669459", APIVersion:"apps/v1", ResourceVersion:"330", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I0108 20:23:36.732328       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"991a67c2-7762-467b-b9f6-82338967b674", APIVersion:"apps/v1", ResourceVersion:"212", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-nnxpz
	I0108 20:23:36.817203       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"837818af-4496-49db-84c7-8d133bc9cda4", APIVersion:"apps/v1", ResourceVersion:"336", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-42vqr
	I0108 20:23:36.818148       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"104bbba4-6779-45c6-9354-2afa44370f69", APIVersion:"apps/v1", ResourceVersion:"235", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-h92r6
	E0108 20:23:36.928769       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"104bbba4-6779-45c6-9354-2afa44370f69", ResourceVersion:"235", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63840342201, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230809-80a64d96\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40009f4d40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40009f4d60)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40009f4d80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*
int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40009f4da0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI
:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40009f4dc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVol
umeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40009f4de0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDis
k:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), Sca
leIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230809-80a64d96", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40009f4e00)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40009f4e40)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.Re
sourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log"
, TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4000177180), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000ad10e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40009247e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.P
odDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000fc8700)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000ad11e0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0108 20:23:46.531381       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0108 20:23:59.722695       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"9893620b-9424-4226-93e5-be5fe682e060", APIVersion:"apps/v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0108 20:23:59.752707       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"d0d41489-eafe-48f7-adcb-13b8d6498e03", APIVersion:"apps/v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-9fwq9
	I0108 20:23:59.771984       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"5a8dadc7-e8c4-4cf7-80c3-818ebbeb44c4", APIVersion:"batch/v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-57st7
	I0108 20:23:59.816535       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"99c477d5-36dd-4f36-8c55-2eddcfae4fa3", APIVersion:"batch/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-5d2sv
	I0108 20:24:02.011627       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"5a8dadc7-e8c4-4cf7-80c3-818ebbeb44c4", APIVersion:"batch/v1", ResourceVersion:"479", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0108 20:24:03.004013       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"99c477d5-36dd-4f36-8c55-2eddcfae4fa3", APIVersion:"batch/v1", ResourceVersion:"486", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0108 20:27:15.951837       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"9626066b-b3d1-44e9-b2c2-1451dfa13064", APIVersion:"apps/v1", ResourceVersion:"722", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0108 20:27:15.986526       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"53c598f4-7245-400f-a9a5-c29a9e9fa5b3", APIVersion:"apps/v1", ResourceVersion:"723", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-7hkvl
	E0108 20:27:39.198574       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-t8mbw" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	
	==> kube-proxy [7f3457151bf983b8527cb6e4bf958573d6cef58ed5e53c010adc4aebbbc65293] <==
	W0108 20:23:39.390559       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0108 20:23:39.401691       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0108 20:23:39.401739       1 server_others.go:186] Using iptables Proxier.
	I0108 20:23:39.402017       1 server.go:583] Version: v1.18.20
	I0108 20:23:39.403032       1 config.go:315] Starting service config controller
	I0108 20:23:39.403118       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0108 20:23:39.403242       1 config.go:133] Starting endpoints config controller
	I0108 20:23:39.403281       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0108 20:23:39.503344       1 shared_informer.go:230] Caches are synced for service config 
	I0108 20:23:39.503494       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [347387fea614f3119d5637957328be904d2db1b697bd99c2f8382a14996eaa0b] <==
	W0108 20:23:18.223500       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0108 20:23:18.255433       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0108 20:23:18.255461       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0108 20:23:18.262041       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0108 20:23:18.262167       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 20:23:18.262182       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 20:23:18.262204       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0108 20:23:18.279569       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 20:23:18.279749       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 20:23:18.279864       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 20:23:18.279966       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 20:23:18.280064       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 20:23:18.280166       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 20:23:18.280282       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 20:23:18.280374       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 20:23:18.280462       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 20:23:18.283593       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 20:23:18.283789       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 20:23:18.285681       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 20:23:19.114592       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 20:23:19.199570       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 20:23:19.406486       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0108 20:23:21.162339       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0108 20:23:36.919483       1 factory.go:503] pod: kube-system/coredns-66bff467f8-42vqr is already present in the active queue
	E0108 20:23:37.060422       1 factory.go:503] pod: kube-system/storage-provisioner is already present in unschedulable queue
	
	
	==> kubelet <==
	Jan 08 20:27:20 ingress-addon-legacy-105176 kubelet[1635]: I0108 20:27:20.444442    1635 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9dbe0c54081c881c01809c3f572f866e6d70ffe5f076bd9a50bf11a253252a9f
	Jan 08 20:27:20 ingress-addon-legacy-105176 kubelet[1635]: I0108 20:27:20.444548    1635 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 63057a4213935ac50a001c30e221f5381509a5e99cd88a021b89d29a9335f1b6
	Jan 08 20:27:20 ingress-addon-legacy-105176 kubelet[1635]: E0108 20:27:20.444784    1635 pod_workers.go:191] Error syncing pod 0da41536-161f-443d-ad62-9532d4cb7b0f ("hello-world-app-5f5d8b66bb-7hkvl_default(0da41536-161f-443d-ad62-9532d4cb7b0f)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-7hkvl_default(0da41536-161f-443d-ad62-9532d4cb7b0f)"
	Jan 08 20:27:21 ingress-addon-legacy-105176 kubelet[1635]: I0108 20:27:21.447138    1635 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 63057a4213935ac50a001c30e221f5381509a5e99cd88a021b89d29a9335f1b6
	Jan 08 20:27:21 ingress-addon-legacy-105176 kubelet[1635]: E0108 20:27:21.447375    1635 pod_workers.go:191] Error syncing pod 0da41536-161f-443d-ad62-9532d4cb7b0f ("hello-world-app-5f5d8b66bb-7hkvl_default(0da41536-161f-443d-ad62-9532d4cb7b0f)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-7hkvl_default(0da41536-161f-443d-ad62-9532d4cb7b0f)"
	Jan 08 20:27:21 ingress-addon-legacy-105176 kubelet[1635]: E0108 20:27:21.793946    1635 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 08 20:27:21 ingress-addon-legacy-105176 kubelet[1635]: E0108 20:27:21.793986    1635 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 08 20:27:21 ingress-addon-legacy-105176 kubelet[1635]: E0108 20:27:21.794035    1635 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 08 20:27:21 ingress-addon-legacy-105176 kubelet[1635]: E0108 20:27:21.794073    1635 pod_workers.go:191] Error syncing pod d9199f73-01f6-4fe3-9d58-5adc74f792d1 ("kube-ingress-dns-minikube_kube-system(d9199f73-01f6-4fe3-9d58-5adc74f792d1)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jan 08 20:27:31 ingress-addon-legacy-105176 kubelet[1635]: I0108 20:27:31.969454    1635 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-9vsbn" (UniqueName: "kubernetes.io/secret/d9199f73-01f6-4fe3-9d58-5adc74f792d1-minikube-ingress-dns-token-9vsbn") pod "d9199f73-01f6-4fe3-9d58-5adc74f792d1" (UID: "d9199f73-01f6-4fe3-9d58-5adc74f792d1")
	Jan 08 20:27:31 ingress-addon-legacy-105176 kubelet[1635]: I0108 20:27:31.974260    1635 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9199f73-01f6-4fe3-9d58-5adc74f792d1-minikube-ingress-dns-token-9vsbn" (OuterVolumeSpecName: "minikube-ingress-dns-token-9vsbn") pod "d9199f73-01f6-4fe3-9d58-5adc74f792d1" (UID: "d9199f73-01f6-4fe3-9d58-5adc74f792d1"). InnerVolumeSpecName "minikube-ingress-dns-token-9vsbn". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 20:27:32 ingress-addon-legacy-105176 kubelet[1635]: I0108 20:27:32.069816    1635 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-9vsbn" (UniqueName: "kubernetes.io/secret/d9199f73-01f6-4fe3-9d58-5adc74f792d1-minikube-ingress-dns-token-9vsbn") on node "ingress-addon-legacy-105176" DevicePath ""
	Jan 08 20:27:34 ingress-addon-legacy-105176 kubelet[1635]: E0108 20:27:34.402817    1635 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-9fwq9.17a87923fae2f478", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-9fwq9", UID:"1b84907b-c2a4-4d82-8085-cbe7502df342", APIVersion:"v1", ResourceVersion:"465", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-105176"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15f348d97d35878, ext:253112978810, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15f348d97d35878, ext:253112978810, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-9fwq9.17a87923fae2f478" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 08 20:27:34 ingress-addon-legacy-105176 kubelet[1635]: E0108 20:27:34.430487    1635 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-9fwq9.17a87923fae2f478", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-9fwq9", UID:"1b84907b-c2a4-4d82-8085-cbe7502df342", APIVersion:"v1", ResourceVersion:"465", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-105176"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15f348d97d35878, ext:253112978810, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15f348d995337d6, ext:253138136280, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-9fwq9.17a87923fae2f478" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 08 20:27:35 ingress-addon-legacy-105176 kubelet[1635]: I0108 20:27:35.793087    1635 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 63057a4213935ac50a001c30e221f5381509a5e99cd88a021b89d29a9335f1b6
	Jan 08 20:27:36 ingress-addon-legacy-105176 kubelet[1635]: I0108 20:27:36.470017    1635 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 63057a4213935ac50a001c30e221f5381509a5e99cd88a021b89d29a9335f1b6
	Jan 08 20:27:36 ingress-addon-legacy-105176 kubelet[1635]: I0108 20:27:36.470224    1635 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: ac545635d4014e0cada0ac1ed546ca04e4df0e8b6066dfee03dc7b9bbc0fa8be
	Jan 08 20:27:36 ingress-addon-legacy-105176 kubelet[1635]: E0108 20:27:36.470494    1635 pod_workers.go:191] Error syncing pod 0da41536-161f-443d-ad62-9532d4cb7b0f ("hello-world-app-5f5d8b66bb-7hkvl_default(0da41536-161f-443d-ad62-9532d4cb7b0f)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-7hkvl_default(0da41536-161f-443d-ad62-9532d4cb7b0f)"
	Jan 08 20:27:37 ingress-addon-legacy-105176 kubelet[1635]: W0108 20:27:37.473591    1635 pod_container_deletor.go:77] Container "6e760b3a29a81b509077fef53dc4d9ae55ef0bc82c35be1fa0af424c801469ef" not found in pod's containers
	Jan 08 20:27:38 ingress-addon-legacy-105176 kubelet[1635]: I0108 20:27:38.584156    1635 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/1b84907b-c2a4-4d82-8085-cbe7502df342-webhook-cert") pod "1b84907b-c2a4-4d82-8085-cbe7502df342" (UID: "1b84907b-c2a4-4d82-8085-cbe7502df342")
	Jan 08 20:27:38 ingress-addon-legacy-105176 kubelet[1635]: I0108 20:27:38.584211    1635 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-bfr99" (UniqueName: "kubernetes.io/secret/1b84907b-c2a4-4d82-8085-cbe7502df342-ingress-nginx-token-bfr99") pod "1b84907b-c2a4-4d82-8085-cbe7502df342" (UID: "1b84907b-c2a4-4d82-8085-cbe7502df342")
	Jan 08 20:27:38 ingress-addon-legacy-105176 kubelet[1635]: I0108 20:27:38.590268    1635 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b84907b-c2a4-4d82-8085-cbe7502df342-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "1b84907b-c2a4-4d82-8085-cbe7502df342" (UID: "1b84907b-c2a4-4d82-8085-cbe7502df342"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 20:27:38 ingress-addon-legacy-105176 kubelet[1635]: I0108 20:27:38.590903    1635 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b84907b-c2a4-4d82-8085-cbe7502df342-ingress-nginx-token-bfr99" (OuterVolumeSpecName: "ingress-nginx-token-bfr99") pod "1b84907b-c2a4-4d82-8085-cbe7502df342" (UID: "1b84907b-c2a4-4d82-8085-cbe7502df342"). InnerVolumeSpecName "ingress-nginx-token-bfr99". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 20:27:38 ingress-addon-legacy-105176 kubelet[1635]: I0108 20:27:38.684493    1635 reconciler.go:319] Volume detached for volume "ingress-nginx-token-bfr99" (UniqueName: "kubernetes.io/secret/1b84907b-c2a4-4d82-8085-cbe7502df342-ingress-nginx-token-bfr99") on node "ingress-addon-legacy-105176" DevicePath ""
	Jan 08 20:27:38 ingress-addon-legacy-105176 kubelet[1635]: I0108 20:27:38.684542    1635 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/1b84907b-c2a4-4d82-8085-cbe7502df342-webhook-cert") on node "ingress-addon-legacy-105176" DevicePath ""
	
	
	==> storage-provisioner [30583351a99d8dc0d062453704910d78d7f8265ea5bfea25fb500b9adc733f40] <==
	I0108 20:23:50.053723       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 20:23:50.078640       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 20:23:50.084252       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 20:23:50.095715       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 20:23:50.096201       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1eadd8ea-e4e1-4086-9d53-f8d0ee150f51", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-105176_d46d7e0e-463d-4864-954e-aa2ab27cf79b became leader
	I0108 20:23:50.098782       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-105176_d46d7e0e-463d-4864-954e-aa2ab27cf79b!
	I0108 20:23:50.200061       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-105176_d46d7e0e-463d-4864-954e-aa2ab27cf79b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-105176 -n ingress-addon-legacy-105176
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-105176 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (212.31s)
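
The kubelet entries in the post-mortem above point at a likely root cause for the ingress-dns part of this failure: CRI-O rejects the short image name "cryptexlabs/minikube-ingress-dns" because no unqualified-search registries are defined in /etc/containers/registries.conf inside the node. A minimal sketch of one possible workaround follows; the choice of docker.io as the search registry is an assumption for illustration, not something this report confirms:

	# Assumption: docker.io is where the short name should resolve.
	# Append an unqualified-search entry inside the minikube node, then restart CRI-O.
	out/minikube-linux-arm64 -p ingress-addon-legacy-105176 ssh \
	  "echo 'unqualified-search-registries = [\"docker.io\"]' | sudo tee -a /etc/containers/registries.conf && sudo systemctl restart crio"

Alternatively, referencing the image by its fully qualified name (with an explicit docker.io/ prefix) in the addon manifest sidesteps short-name resolution entirely.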

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-933566 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-933566 -- exec busybox-5bc68d56bd-lxnll -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-933566 -- exec busybox-5bc68d56bd-lxnll -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-933566 -- exec busybox-5bc68d56bd-lxnll -- sh -c "ping -c 1 192.168.58.1": exit status 1 (230.775943ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-lxnll): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-933566 -- exec busybox-5bc68d56bd-zsk76 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-933566 -- exec busybox-5bc68d56bd-zsk76 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-933566 -- exec busybox-5bc68d56bd-zsk76 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (225.901405ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-zsk76): exit status 1
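
Both probes fail identically: busybox ping needs a raw ICMP socket, and the runtime's default capability set here evidently drops CAP_NET_RAW for unprivileged containers, hence "ping: permission denied (are you root?)". One way to check that hypothesis is to repeat the ping from a pod that adds the capability back; the pod name ping-netraw below is made up for this sketch:

	# Hypothetical pod "ping-netraw": same busybox ping, but with NET_RAW granted
	# so the raw ICMP socket can be opened without root.
	out/minikube-linux-arm64 kubectl -p multinode-933566 -- run ping-netraw --image=busybox --restart=Never \
	  --overrides='{"apiVersion":"v1","spec":{"containers":[{"name":"ping-netraw","image":"busybox","command":["ping","-c","1","192.168.58.1"],"securityContext":{"capabilities":{"add":["NET_RAW"]}}}]}}'
	out/minikube-linux-arm64 kubectl -p multinode-933566 -- logs ping-netraw

If the ping succeeds once NET_RAW is added, the failure is a capability-drop issue in the runtime configuration rather than a networking problem between the pods and the host.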
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-933566
helpers_test.go:235: (dbg) docker inspect multinode-933566:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "21d6edc8691bb2b60d1720def2f012d16584a71959435035c4625be32f0c36cb",
	        "Created": "2024-01-08T20:33:49.98952681Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 702975,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T20:33:50.345955105Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3bfff26a1ae256fcdf8f10a333efdefbe26edc5c1669e1cc5c973c016e44d3c4",
	        "ResolvConfPath": "/var/lib/docker/containers/21d6edc8691bb2b60d1720def2f012d16584a71959435035c4625be32f0c36cb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/21d6edc8691bb2b60d1720def2f012d16584a71959435035c4625be32f0c36cb/hostname",
	        "HostsPath": "/var/lib/docker/containers/21d6edc8691bb2b60d1720def2f012d16584a71959435035c4625be32f0c36cb/hosts",
	        "LogPath": "/var/lib/docker/containers/21d6edc8691bb2b60d1720def2f012d16584a71959435035c4625be32f0c36cb/21d6edc8691bb2b60d1720def2f012d16584a71959435035c4625be32f0c36cb-json.log",
	        "Name": "/multinode-933566",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-933566:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-933566",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/933e2bdcf37c0dd3c15d09262f43e5db3975958c6f350e0500969b9972763a73-init/diff:/var/lib/docker/overlay2/6dc70d5fd4ec367ecfc7dc99fc7bcaf35d9752c3024a41d78b490451f211e3b4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/933e2bdcf37c0dd3c15d09262f43e5db3975958c6f350e0500969b9972763a73/merged",
	                "UpperDir": "/var/lib/docker/overlay2/933e2bdcf37c0dd3c15d09262f43e5db3975958c6f350e0500969b9972763a73/diff",
	                "WorkDir": "/var/lib/docker/overlay2/933e2bdcf37c0dd3c15d09262f43e5db3975958c6f350e0500969b9972763a73/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-933566",
	                "Source": "/var/lib/docker/volumes/multinode-933566/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-933566",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-933566",
	                "name.minikube.sigs.k8s.io": "multinode-933566",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3874eb6ef948e6971ec3b33cc4d6e8ae71c6295f85d543ff390226686fc750de",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33479"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33478"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3874eb6ef948",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-933566": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "21d6edc8691b",
	                        "multinode-933566"
	                    ],
	                    "NetworkID": "3e130247834cdd5475fbf46e1980b2e1b27dd1e5d05b841551b0f439b0f2b84b",
	                    "EndpointID": "f8ab2985ecf30703cc047b0d3160748c20c12faf24ee3eead31b2c11dfba9225",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
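For reference, the empty HostPort values under HostConfig.PortBindings above are ephemeral bindings requested on 127.0.0.1; the resolved ports appear under NetworkSettings.Ports (e.g. 22/tcp -> 33479). A minimal sketch for re-reading them while the container is up, using docker's own inspect template syntax:

	docker port multinode-933566 22/tcp
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' multinode-933566

The first prints 127.0.0.1:33479 and the second just 33479, matching the mapping recorded above.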
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-933566 -n multinode-933566
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-933566 logs -n 25: (1.500588497s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-349299                           | mount-start-2-349299 | jenkins | v1.32.0 | 08 Jan 24 20:33 UTC | 08 Jan 24 20:33 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-349299 ssh -- ls                    | mount-start-2-349299 | jenkins | v1.32.0 | 08 Jan 24 20:33 UTC | 08 Jan 24 20:33 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-347420                           | mount-start-1-347420 | jenkins | v1.32.0 | 08 Jan 24 20:33 UTC | 08 Jan 24 20:33 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-349299 ssh -- ls                    | mount-start-2-349299 | jenkins | v1.32.0 | 08 Jan 24 20:33 UTC | 08 Jan 24 20:33 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-349299                           | mount-start-2-349299 | jenkins | v1.32.0 | 08 Jan 24 20:33 UTC | 08 Jan 24 20:33 UTC |
	| start   | -p mount-start-2-349299                           | mount-start-2-349299 | jenkins | v1.32.0 | 08 Jan 24 20:33 UTC | 08 Jan 24 20:33 UTC |
	| ssh     | mount-start-2-349299 ssh -- ls                    | mount-start-2-349299 | jenkins | v1.32.0 | 08 Jan 24 20:33 UTC | 08 Jan 24 20:33 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-349299                           | mount-start-2-349299 | jenkins | v1.32.0 | 08 Jan 24 20:33 UTC | 08 Jan 24 20:33 UTC |
	| delete  | -p mount-start-1-347420                           | mount-start-1-347420 | jenkins | v1.32.0 | 08 Jan 24 20:33 UTC | 08 Jan 24 20:33 UTC |
	| start   | -p multinode-933566                               | multinode-933566     | jenkins | v1.32.0 | 08 Jan 24 20:33 UTC | 08 Jan 24 20:35 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-933566 -- apply -f                   | multinode-933566     | jenkins | v1.32.0 | 08 Jan 24 20:35 UTC | 08 Jan 24 20:35 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-933566 -- rollout                    | multinode-933566     | jenkins | v1.32.0 | 08 Jan 24 20:35 UTC | 08 Jan 24 20:35 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-933566 -- get pods -o                | multinode-933566     | jenkins | v1.32.0 | 08 Jan 24 20:35 UTC | 08 Jan 24 20:35 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-933566 -- get pods -o                | multinode-933566     | jenkins | v1.32.0 | 08 Jan 24 20:35 UTC | 08 Jan 24 20:35 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-933566 -- exec                       | multinode-933566     | jenkins | v1.32.0 | 08 Jan 24 20:35 UTC | 08 Jan 24 20:35 UTC |
	|         | busybox-5bc68d56bd-lxnll --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-933566 -- exec                       | multinode-933566     | jenkins | v1.32.0 | 08 Jan 24 20:35 UTC | 08 Jan 24 20:35 UTC |
	|         | busybox-5bc68d56bd-zsk76 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-933566 -- exec                       | multinode-933566     | jenkins | v1.32.0 | 08 Jan 24 20:35 UTC | 08 Jan 24 20:35 UTC |
	|         | busybox-5bc68d56bd-lxnll --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-933566 -- exec                       | multinode-933566     | jenkins | v1.32.0 | 08 Jan 24 20:35 UTC | 08 Jan 24 20:35 UTC |
	|         | busybox-5bc68d56bd-zsk76 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-933566 -- exec                       | multinode-933566     | jenkins | v1.32.0 | 08 Jan 24 20:35 UTC | 08 Jan 24 20:35 UTC |
	|         | busybox-5bc68d56bd-lxnll -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-933566 -- exec                       | multinode-933566     | jenkins | v1.32.0 | 08 Jan 24 20:35 UTC | 08 Jan 24 20:35 UTC |
	|         | busybox-5bc68d56bd-zsk76 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-933566 -- get pods -o                | multinode-933566     | jenkins | v1.32.0 | 08 Jan 24 20:35 UTC | 08 Jan 24 20:35 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-933566 -- exec                       | multinode-933566     | jenkins | v1.32.0 | 08 Jan 24 20:35 UTC | 08 Jan 24 20:35 UTC |
	|         | busybox-5bc68d56bd-lxnll                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-933566 -- exec                       | multinode-933566     | jenkins | v1.32.0 | 08 Jan 24 20:35 UTC |                     |
	|         | busybox-5bc68d56bd-lxnll -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-933566 -- exec                       | multinode-933566     | jenkins | v1.32.0 | 08 Jan 24 20:35 UTC | 08 Jan 24 20:35 UTC |
	|         | busybox-5bc68d56bd-zsk76                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-933566 -- exec                       | multinode-933566     | jenkins | v1.32.0 | 08 Jan 24 20:35 UTC |                     |
	|         | busybox-5bc68d56bd-zsk76 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
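	# Note: the two exec rows above with no End Time are the checks that failed in
	# this test (pinging the docker bridge gateway 192.168.58.1 from each busybox pod).
	# A minimal repro sketch using the profile and pod names from the table (not captured output):
	minikube -p multinode-933566 kubectl -- exec busybox-5bc68d56bd-lxnll -- sh -c "ping -c 1 192.168.58.1"
	minikube -p multinode-933566 kubectl -- exec busybox-5bc68d56bd-zsk76 -- sh -c "ping -c 1 192.168.58.1"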
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:33:44
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:33:44.567123  702522 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:33:44.567291  702522 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:33:44.567302  702522 out.go:309] Setting ErrFile to fd 2...
	I0108 20:33:44.567308  702522 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:33:44.567582  702522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-633350/.minikube/bin
	I0108 20:33:44.568077  702522 out.go:303] Setting JSON to false
	I0108 20:33:44.568925  702522 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11767,"bootTime":1704734258,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0108 20:33:44.568998  702522 start.go:138] virtualization:  
	I0108 20:33:44.571907  702522 out.go:177] * [multinode-933566] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 20:33:44.573840  702522 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:33:44.575862  702522 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:33:44.573974  702522 notify.go:220] Checking for updates...
	I0108 20:33:44.579619  702522 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-633350/kubeconfig
	I0108 20:33:44.581780  702522 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-633350/.minikube
	I0108 20:33:44.583873  702522 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0108 20:33:44.586189  702522 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:33:44.588361  702522 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:33:44.611909  702522 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:33:44.612042  702522 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:33:44.689562  702522 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-08 20:33:44.679028213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:33:44.689669  702522 docker.go:295] overlay module found
	I0108 20:33:44.693210  702522 out.go:177] * Using the docker driver based on user configuration
	I0108 20:33:44.695021  702522 start.go:298] selected driver: docker
	I0108 20:33:44.695036  702522 start.go:902] validating driver "docker" against <nil>
	I0108 20:33:44.695049  702522 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:33:44.695677  702522 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:33:44.761024  702522 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-08 20:33:44.751764071 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:33:44.761181  702522 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 20:33:44.761433  702522 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 20:33:44.763564  702522 out.go:177] * Using Docker driver with root privileges
	I0108 20:33:44.765571  702522 cni.go:84] Creating CNI manager for ""
	I0108 20:33:44.765596  702522 cni.go:136] 0 nodes found, recommending kindnet
	I0108 20:33:44.765607  702522 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 20:33:44.765624  702522 start_flags.go:323] config:
	{Name:multinode-933566 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-933566 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:33:44.767877  702522 out.go:177] * Starting control plane node multinode-933566 in cluster multinode-933566
	I0108 20:33:44.769604  702522 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 20:33:44.771663  702522 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:33:44.773358  702522 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:33:44.773409  702522 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0108 20:33:44.773426  702522 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0108 20:33:44.773431  702522 cache.go:56] Caching tarball of preloaded images
	I0108 20:33:44.773507  702522 preload.go:174] Found /home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0108 20:33:44.773516  702522 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 20:33:44.773861  702522 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/config.json ...
	I0108 20:33:44.773891  702522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/config.json: {Name:mk308e41dc9d6fbde23ce9ee201bacadfedec797 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:33:44.790536  702522 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0108 20:33:44.790558  702522 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0108 20:33:44.790584  702522 cache.go:194] Successfully downloaded all kic artifacts
	I0108 20:33:44.790653  702522 start.go:365] acquiring machines lock for multinode-933566: {Name:mk1f4b770712b00c6e72d6449ff5c16de84cb1a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:33:44.790769  702522 start.go:369] acquired machines lock for "multinode-933566" in 99.185µs
	I0108 20:33:44.790794  702522 start.go:93] Provisioning new machine with config: &{Name:multinode-933566 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-933566 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 20:33:44.790868  702522 start.go:125] createHost starting for "" (driver="docker")
	I0108 20:33:44.795182  702522 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0108 20:33:44.795432  702522 start.go:159] libmachine.API.Create for "multinode-933566" (driver="docker")
	I0108 20:33:44.795466  702522 client.go:168] LocalClient.Create starting
	I0108 20:33:44.795556  702522 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem
	I0108 20:33:44.795601  702522 main.go:141] libmachine: Decoding PEM data...
	I0108 20:33:44.795621  702522 main.go:141] libmachine: Parsing certificate...
	I0108 20:33:44.795678  702522 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem
	I0108 20:33:44.795700  702522 main.go:141] libmachine: Decoding PEM data...
	I0108 20:33:44.795715  702522 main.go:141] libmachine: Parsing certificate...
	I0108 20:33:44.796086  702522 cli_runner.go:164] Run: docker network inspect multinode-933566 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 20:33:44.812543  702522 cli_runner.go:211] docker network inspect multinode-933566 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 20:33:44.812631  702522 network_create.go:281] running [docker network inspect multinode-933566] to gather additional debugging logs...
	I0108 20:33:44.812654  702522 cli_runner.go:164] Run: docker network inspect multinode-933566
	W0108 20:33:44.828792  702522 cli_runner.go:211] docker network inspect multinode-933566 returned with exit code 1
	I0108 20:33:44.828819  702522 network_create.go:284] error running [docker network inspect multinode-933566]: docker network inspect multinode-933566: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-933566 not found
	I0108 20:33:44.828831  702522 network_create.go:286] output of [docker network inspect multinode-933566]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-933566 not found
	
	** /stderr **
	I0108 20:33:44.828936  702522 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 20:33:44.845796  702522 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c71a8e375fca IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:e7:14:80:72} reservation:<nil>}
	I0108 20:33:44.846139  702522 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024c5660}
	I0108 20:33:44.846159  702522 network_create.go:124] attempt to create docker network multinode-933566 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0108 20:33:44.846214  702522 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-933566 multinode-933566
	I0108 20:33:44.915022  702522 network_create.go:108] docker network multinode-933566 192.168.58.0/24 created
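	# Sketch: the chosen subnet/gateway can be confirmed with the same inspect template
	# fields minikube queried above (expected: 192.168.58.0/24 via 192.168.58.1):
	docker network inspect multinode-933566 --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'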
	I0108 20:33:44.915056  702522 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-933566" container
	I0108 20:33:44.915134  702522 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 20:33:44.931309  702522 cli_runner.go:164] Run: docker volume create multinode-933566 --label name.minikube.sigs.k8s.io=multinode-933566 --label created_by.minikube.sigs.k8s.io=true
	I0108 20:33:44.949586  702522 oci.go:103] Successfully created a docker volume multinode-933566
	I0108 20:33:44.949677  702522 cli_runner.go:164] Run: docker run --rm --name multinode-933566-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-933566 --entrypoint /usr/bin/test -v multinode-933566:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I0108 20:33:45.567574  702522 oci.go:107] Successfully prepared a docker volume multinode-933566
	I0108 20:33:45.567629  702522 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:33:45.567649  702522 kic.go:194] Starting extracting preloaded images to volume ...
	I0108 20:33:45.567726  702522 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-933566:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 20:33:49.908444  702522 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-933566:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (4.340671649s)
	I0108 20:33:49.908478  702522 kic.go:203] duration metric: took 4.340825 seconds to extract preloaded images to volume
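	# The extraction above is a generic volume-seeding pattern: a throwaway container
	# with tar as its entrypoint, the tarball bind-mounted read-only, and the named
	# volume mounted as the destination. Sketch with placeholder tarball path and image:
	docker run --rm --entrypoint /usr/bin/tar \
	  -v /path/to/preloaded-images.tar.lz4:/preloaded.tar:ro \
	  -v multinode-933566:/extractDir \
	  <kicbase-image> -I lz4 -xf /preloaded.tar -C /extractDir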
	W0108 20:33:49.908619  702522 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 20:33:49.908763  702522 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 20:33:49.973509  702522 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-933566 --name multinode-933566 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-933566 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-933566 --network multinode-933566 --ip 192.168.58.2 --volume multinode-933566:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
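	# Each --publish=127.0.0.1::PORT above requests an ephemeral loopback port, which is
	# why HostConfig.PortBindings carried empty HostPort values in the inspect output
	# earlier. Sketch to read one resolved binding back:
	docker port multinode-933566 8443/tcp   # expected: 127.0.0.1:33476, per NetworkSettings.Ports above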
	I0108 20:33:50.355362  702522 cli_runner.go:164] Run: docker container inspect multinode-933566 --format={{.State.Running}}
	I0108 20:33:50.377620  702522 cli_runner.go:164] Run: docker container inspect multinode-933566 --format={{.State.Status}}
	I0108 20:33:50.405117  702522 cli_runner.go:164] Run: docker exec multinode-933566 stat /var/lib/dpkg/alternatives/iptables
	I0108 20:33:50.477677  702522 oci.go:144] the created container "multinode-933566" has a running status.
	I0108 20:33:50.477704  702522 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17907-633350/.minikube/machines/multinode-933566/id_rsa...
	I0108 20:33:50.828779  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/machines/multinode-933566/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0108 20:33:50.828859  702522 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17907-633350/.minikube/machines/multinode-933566/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 20:33:50.862549  702522 cli_runner.go:164] Run: docker container inspect multinode-933566 --format={{.State.Status}}
	I0108 20:33:50.903481  702522 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 20:33:50.903500  702522 kic_runner.go:114] Args: [docker exec --privileged multinode-933566 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 20:33:51.005891  702522 cli_runner.go:164] Run: docker container inspect multinode-933566 --format={{.State.Status}}
	I0108 20:33:51.039873  702522 machine.go:88] provisioning docker machine ...
	I0108 20:33:51.039907  702522 ubuntu.go:169] provisioning hostname "multinode-933566"
	I0108 20:33:51.039983  702522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-933566
	I0108 20:33:51.070927  702522 main.go:141] libmachine: Using SSH client type: native
	I0108 20:33:51.071367  702522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33479 <nil> <nil>}
	I0108 20:33:51.071391  702522 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-933566 && echo "multinode-933566" | sudo tee /etc/hostname
	I0108 20:33:51.072173  702522 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38660->127.0.0.1:33479: read: connection reset by peer
	I0108 20:33:54.225571  702522 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-933566
	
	I0108 20:33:54.225691  702522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-933566
	I0108 20:33:54.243783  702522 main.go:141] libmachine: Using SSH client type: native
	I0108 20:33:54.244197  702522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33479 <nil> <nil>}
	I0108 20:33:54.244221  702522 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-933566' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-933566/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-933566' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:33:54.383764  702522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
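	# Sketch (assumes the profile above): verify the /etc/hosts entry the script wrote.
	minikube -p multinode-933566 ssh "grep multinode-933566 /etc/hosts"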
	I0108 20:33:54.383794  702522 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17907-633350/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-633350/.minikube}
	I0108 20:33:54.383814  702522 ubuntu.go:177] setting up certificates
	I0108 20:33:54.383826  702522 provision.go:83] configureAuth start
	I0108 20:33:54.383887  702522 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-933566
	I0108 20:33:54.402750  702522 provision.go:138] copyHostCerts
	I0108 20:33:54.402795  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17907-633350/.minikube/ca.pem
	I0108 20:33:54.402837  702522 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-633350/.minikube/ca.pem, removing ...
	I0108 20:33:54.402849  702522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-633350/.minikube/ca.pem
	I0108 20:33:54.402927  702522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-633350/.minikube/ca.pem (1082 bytes)
	I0108 20:33:54.403008  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17907-633350/.minikube/cert.pem
	I0108 20:33:54.403029  702522 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-633350/.minikube/cert.pem, removing ...
	I0108 20:33:54.403043  702522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-633350/.minikube/cert.pem
	I0108 20:33:54.403086  702522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-633350/.minikube/cert.pem (1123 bytes)
	I0108 20:33:54.403134  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17907-633350/.minikube/key.pem
	I0108 20:33:54.403152  702522 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-633350/.minikube/key.pem, removing ...
	I0108 20:33:54.403156  702522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-633350/.minikube/key.pem
	I0108 20:33:54.403182  702522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-633350/.minikube/key.pem (1679 bytes)
	I0108 20:33:54.403233  702522 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca-key.pem org=jenkins.multinode-933566 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-933566]
	I0108 20:33:54.792138  702522 provision.go:172] copyRemoteCerts
	I0108 20:33:54.792210  702522 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:33:54.792255  702522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-933566
	I0108 20:33:54.809681  702522 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33479 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/multinode-933566/id_rsa Username:docker}
	I0108 20:33:54.913038  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 20:33:54.913123  702522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 20:33:54.942394  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 20:33:54.942617  702522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0108 20:33:54.971321  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 20:33:54.971383  702522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 20:33:55.000921  702522 provision.go:86] duration metric: configureAuth took 617.079502ms
	I0108 20:33:55.000947  702522 ubuntu.go:193] setting minikube options for container-runtime
	I0108 20:33:55.001144  702522 config.go:182] Loaded profile config "multinode-933566": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:33:55.001259  702522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-933566
	I0108 20:33:55.020600  702522 main.go:141] libmachine: Using SSH client type: native
	I0108 20:33:55.021040  702522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33479 <nil> <nil>}
	I0108 20:33:55.021063  702522 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 20:33:55.275939  702522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
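	# The CRI-O option written above lands in /etc/sysconfig/crio.minikube and can be
	# inspected after provisioning (sketch, assuming the profile name above):
	minikube -p multinode-933566 ssh "cat /etc/sysconfig/crio.minikube"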
	
	I0108 20:33:55.276021  702522 machine.go:91] provisioned docker machine in 4.236125851s
	I0108 20:33:55.276038  702522 client.go:171] LocalClient.Create took 10.480563354s
	I0108 20:33:55.276056  702522 start.go:167] duration metric: libmachine.API.Create for "multinode-933566" took 10.480624433s
	I0108 20:33:55.276067  702522 start.go:300] post-start starting for "multinode-933566" (driver="docker")
	I0108 20:33:55.276077  702522 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:33:55.276151  702522 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:33:55.276194  702522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-933566
	I0108 20:33:55.294556  702522 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33479 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/multinode-933566/id_rsa Username:docker}
	I0108 20:33:55.393323  702522 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:33:55.397300  702522 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0108 20:33:55.397318  702522 command_runner.go:130] > NAME="Ubuntu"
	I0108 20:33:55.397326  702522 command_runner.go:130] > VERSION_ID="22.04"
	I0108 20:33:55.397332  702522 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0108 20:33:55.397338  702522 command_runner.go:130] > VERSION_CODENAME=jammy
	I0108 20:33:55.397342  702522 command_runner.go:130] > ID=ubuntu
	I0108 20:33:55.397348  702522 command_runner.go:130] > ID_LIKE=debian
	I0108 20:33:55.397354  702522 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0108 20:33:55.397381  702522 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0108 20:33:55.397395  702522 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0108 20:33:55.397404  702522 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0108 20:33:55.397410  702522 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0108 20:33:55.397490  702522 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 20:33:55.397520  702522 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 20:33:55.397532  702522 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 20:33:55.397539  702522 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 20:33:55.397559  702522 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-633350/.minikube/addons for local assets ...
	I0108 20:33:55.397613  702522 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-633350/.minikube/files for local assets ...
	I0108 20:33:55.397693  702522 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/6387322.pem -> 6387322.pem in /etc/ssl/certs
	I0108 20:33:55.397702  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/6387322.pem -> /etc/ssl/certs/6387322.pem
	I0108 20:33:55.397800  702522 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 20:33:55.408173  702522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/6387322.pem --> /etc/ssl/certs/6387322.pem (1708 bytes)
	I0108 20:33:55.436362  702522 start.go:303] post-start completed in 160.28035ms
	I0108 20:33:55.436721  702522 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-933566
	I0108 20:33:55.457048  702522 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/config.json ...
	I0108 20:33:55.457319  702522 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:33:55.457366  702522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-933566
	I0108 20:33:55.479381  702522 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33479 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/multinode-933566/id_rsa Username:docker}
	I0108 20:33:55.572303  702522 command_runner.go:130] > 14%!
(MISSING)
	I0108 20:33:55.572395  702522 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 20:33:55.577757  702522 command_runner.go:130] > 168G
	I0108 20:33:55.577797  702522 start.go:128] duration metric: createHost completed in 10.786920431s
	I0108 20:33:55.577808  702522 start.go:83] releasing machines lock for "multinode-933566", held for 10.787030586s
	I0108 20:33:55.577881  702522 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-933566
	I0108 20:33:55.594641  702522 ssh_runner.go:195] Run: cat /version.json
	I0108 20:33:55.594709  702522 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 20:33:55.594721  702522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-933566
	I0108 20:33:55.594768  702522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-933566
	I0108 20:33:55.614015  702522 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33479 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/multinode-933566/id_rsa Username:docker}
	I0108 20:33:55.630198  702522 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33479 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/multinode-933566/id_rsa Username:docker}
	I0108 20:33:55.838116  702522 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 20:33:55.841193  702522 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1703498848-17857", "minikube_version": "v1.32.0", "commit": "d18dc8d014b22564d2860ddb02a821a21df70433"}
	I0108 20:33:55.841368  702522 ssh_runner.go:195] Run: systemctl --version
	I0108 20:33:55.846645  702522 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I0108 20:33:55.846714  702522 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0108 20:33:55.846807  702522 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 20:33:55.989764  702522 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 20:33:55.994849  702522 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0108 20:33:55.994881  702522 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0108 20:33:55.994890  702522 command_runner.go:130] > Device: 3ah/58d	Inode: 1568593     Links: 1
	I0108 20:33:55.994898  702522 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 20:33:55.994905  702522 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0108 20:33:55.994912  702522 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0108 20:33:55.994918  702522 command_runner.go:130] > Change: 2024-01-08 20:10:27.324654076 +0000
	I0108 20:33:55.994928  702522 command_runner.go:130] >  Birth: 2024-01-08 20:10:27.324654076 +0000
	I0108 20:33:55.995287  702522 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:33:56.019672  702522 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 20:33:56.019824  702522 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:33:56.060280  702522 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0108 20:33:56.060316  702522 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
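	Editor's note: the two find/mv runs above neutralize conflicting CNI configs by renaming them with a ".mk_disabled" suffix rather than deleting them, so they remain restorable. A minimal sketch of the same pattern (paths as in the log, run as root):
	  find /etc/cni/net.d -maxdepth 1 -type f \
	    \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \
	    -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;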
	I0108 20:33:56.060324  702522 start.go:475] detecting cgroup driver to use...
	I0108 20:33:56.060355  702522 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 20:33:56.060413  702522 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 20:33:56.078387  702522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 20:33:56.092006  702522 docker.go:217] disabling cri-docker service (if available) ...
	I0108 20:33:56.092069  702522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 20:33:56.107608  702522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 20:33:56.124039  702522 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 20:33:56.215880  702522 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 20:33:56.315183  702522 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0108 20:33:56.315220  702522 docker.go:233] disabling docker service ...
	I0108 20:33:56.315304  702522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 20:33:56.338249  702522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 20:33:56.352364  702522 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 20:33:56.451982  702522 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0108 20:33:56.452068  702522 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 20:33:56.553053  702522 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0108 20:33:56.553129  702522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
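	Editor's note: the sequence above stops, disables, and masks the docker and cri-docker units so CRI-O is the only container runtime left active. A quick way to confirm the result (unit names as in the log):
	  systemctl is-enabled docker.service docker.socket cri-docker.service cri-docker.socket
	  # the masked units report "masked"; the sockets that were only disabled report "disabled"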
	I0108 20:33:56.566035  702522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:33:56.584661  702522 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
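	Editor's note: the tee above writes /etc/crictl.yaml so that plain "crictl" invocations talk to CRI-O. The same effect, sketched with the explicit flag instead of the config file:
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version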
	I0108 20:33:56.586051  702522 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 20:33:56.586115  702522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:33:56.598155  702522 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 20:33:56.598228  702522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:33:56.610132  702522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:33:56.621463  702522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
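	Editor's note: after the pause-image edit and the three sed edits above, /etc/crio/crio.conf.d/02-crio.conf should contain, among other settings, roughly the lines below (section headers omitted; their exact placement in the file is an assumption, though the "crio config" dump later in this log confirms the cgroup_manager and conmon_cgroup values):
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  pause_image = "registry.k8s.io/pause:3.9"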
	I0108 20:33:56.633033  702522 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 20:33:56.643883  702522 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 20:33:56.654141  702522 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0108 20:33:56.654242  702522 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 20:33:56.664213  702522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:33:56.749305  702522 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 20:33:56.874269  702522 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 20:33:56.874378  702522 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 20:33:56.878891  702522 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0108 20:33:56.878954  702522 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 20:33:56.878979  702522 command_runner.go:130] > Device: 43h/67d	Inode: 190         Links: 1
	I0108 20:33:56.879002  702522 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 20:33:56.879026  702522 command_runner.go:130] > Access: 2024-01-08 20:33:56.859794942 +0000
	I0108 20:33:56.879054  702522 command_runner.go:130] > Modify: 2024-01-08 20:33:56.859794942 +0000
	I0108 20:33:56.879074  702522 command_runner.go:130] > Change: 2024-01-08 20:33:56.859794942 +0000
	I0108 20:33:56.879091  702522 command_runner.go:130] >  Birth: -
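	Editor's note: "Will wait 60s for socket path" above is a polling loop around this stat call; an equivalent one-liner sketch:
	  timeout 60 sh -c 'until stat /var/run/crio/crio.sock >/dev/null 2>&1; do sleep 1; done'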
	I0108 20:33:56.879357  702522 start.go:543] Will wait 60s for crictl version
	I0108 20:33:56.879440  702522 ssh_runner.go:195] Run: which crictl
	I0108 20:33:56.883493  702522 command_runner.go:130] > /usr/bin/crictl
	I0108 20:33:56.883948  702522 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 20:33:56.923429  702522 command_runner.go:130] > Version:  0.1.0
	I0108 20:33:56.923505  702522 command_runner.go:130] > RuntimeName:  cri-o
	I0108 20:33:56.923526  702522 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0108 20:33:56.923548  702522 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 20:33:56.926028  702522 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0108 20:33:56.926138  702522 ssh_runner.go:195] Run: crio --version
	I0108 20:33:56.964958  702522 command_runner.go:130] > crio version 1.24.6
	I0108 20:33:56.965027  702522 command_runner.go:130] > Version:          1.24.6
	I0108 20:33:56.965051  702522 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0108 20:33:56.965072  702522 command_runner.go:130] > GitTreeState:     clean
	I0108 20:33:56.965095  702522 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0108 20:33:56.965123  702522 command_runner.go:130] > GoVersion:        go1.18.2
	I0108 20:33:56.965145  702522 command_runner.go:130] > Compiler:         gc
	I0108 20:33:56.965167  702522 command_runner.go:130] > Platform:         linux/arm64
	I0108 20:33:56.965188  702522 command_runner.go:130] > Linkmode:         dynamic
	I0108 20:33:56.965213  702522 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 20:33:56.965235  702522 command_runner.go:130] > SeccompEnabled:   true
	I0108 20:33:56.965256  702522 command_runner.go:130] > AppArmorEnabled:  false
	I0108 20:33:56.967409  702522 ssh_runner.go:195] Run: crio --version
	I0108 20:33:57.009540  702522 command_runner.go:130] > crio version 1.24.6
	I0108 20:33:57.009563  702522 command_runner.go:130] > Version:          1.24.6
	I0108 20:33:57.009573  702522 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0108 20:33:57.009579  702522 command_runner.go:130] > GitTreeState:     clean
	I0108 20:33:57.009586  702522 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0108 20:33:57.009602  702522 command_runner.go:130] > GoVersion:        go1.18.2
	I0108 20:33:57.009610  702522 command_runner.go:130] > Compiler:         gc
	I0108 20:33:57.009616  702522 command_runner.go:130] > Platform:         linux/arm64
	I0108 20:33:57.009627  702522 command_runner.go:130] > Linkmode:         dynamic
	I0108 20:33:57.009637  702522 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 20:33:57.009645  702522 command_runner.go:130] > SeccompEnabled:   true
	I0108 20:33:57.009650  702522 command_runner.go:130] > AppArmorEnabled:  false
	I0108 20:33:57.012107  702522 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0108 20:33:57.013979  702522 cli_runner.go:164] Run: docker network inspect multinode-933566 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 20:33:57.031776  702522 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0108 20:33:57.036626  702522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
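	Editor's note: the bash pipeline above rewrites /etc/hosts in place, filtering out any stale host.minikube.internal entry and appending the gateway mapping. Verifying the result (gateway IP taken from the network inspect above):
	  grep 'host.minikube.internal' /etc/hosts
	  # expected: 192.168.58.1	host.minikube.internal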
	I0108 20:33:57.050409  702522 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:33:57.050499  702522 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:33:57.119948  702522 command_runner.go:130] > {
	I0108 20:33:57.119970  702522 command_runner.go:130] >   "images": [
	I0108 20:33:57.119983  702522 command_runner.go:130] >     {
	I0108 20:33:57.119993  702522 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I0108 20:33:57.119999  702522 command_runner.go:130] >       "repoTags": [
	I0108 20:33:57.120007  702522 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0108 20:33:57.120012  702522 command_runner.go:130] >       ],
	I0108 20:33:57.120018  702522 command_runner.go:130] >       "repoDigests": [
	I0108 20:33:57.120036  702522 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0108 20:33:57.120048  702522 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I0108 20:33:57.120053  702522 command_runner.go:130] >       ],
	I0108 20:33:57.120059  702522 command_runner.go:130] >       "size": "60867618",
	I0108 20:33:57.120064  702522 command_runner.go:130] >       "uid": null,
	I0108 20:33:57.120074  702522 command_runner.go:130] >       "username": "",
	I0108 20:33:57.120085  702522 command_runner.go:130] >       "spec": null,
	I0108 20:33:57.120095  702522 command_runner.go:130] >       "pinned": false
	I0108 20:33:57.120099  702522 command_runner.go:130] >     },
	I0108 20:33:57.120104  702522 command_runner.go:130] >     {
	I0108 20:33:57.120114  702522 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0108 20:33:57.120119  702522 command_runner.go:130] >       "repoTags": [
	I0108 20:33:57.120127  702522 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0108 20:33:57.120132  702522 command_runner.go:130] >       ],
	I0108 20:33:57.120138  702522 command_runner.go:130] >       "repoDigests": [
	I0108 20:33:57.120150  702522 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0108 20:33:57.120171  702522 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0108 20:33:57.120180  702522 command_runner.go:130] >       ],
	I0108 20:33:57.120188  702522 command_runner.go:130] >       "size": "29037500",
	I0108 20:33:57.120195  702522 command_runner.go:130] >       "uid": null,
	I0108 20:33:57.120200  702522 command_runner.go:130] >       "username": "",
	I0108 20:33:57.120205  702522 command_runner.go:130] >       "spec": null,
	I0108 20:33:57.120211  702522 command_runner.go:130] >       "pinned": false
	I0108 20:33:57.120218  702522 command_runner.go:130] >     },
	I0108 20:33:57.120232  702522 command_runner.go:130] >     {
	I0108 20:33:57.120240  702522 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0108 20:33:57.120250  702522 command_runner.go:130] >       "repoTags": [
	I0108 20:33:57.120258  702522 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0108 20:33:57.120267  702522 command_runner.go:130] >       ],
	I0108 20:33:57.120273  702522 command_runner.go:130] >       "repoDigests": [
	I0108 20:33:57.120289  702522 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0108 20:33:57.120304  702522 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0108 20:33:57.120309  702522 command_runner.go:130] >       ],
	I0108 20:33:57.120315  702522 command_runner.go:130] >       "size": "51393451",
	I0108 20:33:57.120322  702522 command_runner.go:130] >       "uid": null,
	I0108 20:33:57.120332  702522 command_runner.go:130] >       "username": "",
	I0108 20:33:57.120343  702522 command_runner.go:130] >       "spec": null,
	I0108 20:33:57.120353  702522 command_runner.go:130] >       "pinned": false
	I0108 20:33:57.120361  702522 command_runner.go:130] >     },
	I0108 20:33:57.120376  702522 command_runner.go:130] >     {
	I0108 20:33:57.120385  702522 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I0108 20:33:57.120397  702522 command_runner.go:130] >       "repoTags": [
	I0108 20:33:57.120404  702522 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0108 20:33:57.120415  702522 command_runner.go:130] >       ],
	I0108 20:33:57.120422  702522 command_runner.go:130] >       "repoDigests": [
	I0108 20:33:57.120436  702522 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I0108 20:33:57.120449  702522 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I0108 20:33:57.120459  702522 command_runner.go:130] >       ],
	I0108 20:33:57.120471  702522 command_runner.go:130] >       "size": "182203183",
	I0108 20:33:57.120476  702522 command_runner.go:130] >       "uid": {
	I0108 20:33:57.120482  702522 command_runner.go:130] >         "value": "0"
	I0108 20:33:57.120493  702522 command_runner.go:130] >       },
	I0108 20:33:57.120502  702522 command_runner.go:130] >       "username": "",
	I0108 20:33:57.120519  702522 command_runner.go:130] >       "spec": null,
	I0108 20:33:57.120525  702522 command_runner.go:130] >       "pinned": false
	I0108 20:33:57.120538  702522 command_runner.go:130] >     },
	I0108 20:33:57.120544  702522 command_runner.go:130] >     {
	I0108 20:33:57.120554  702522 command_runner.go:130] >       "id": "04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419",
	I0108 20:33:57.120559  702522 command_runner.go:130] >       "repoTags": [
	I0108 20:33:57.120571  702522 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0108 20:33:57.120581  702522 command_runner.go:130] >       ],
	I0108 20:33:57.120589  702522 command_runner.go:130] >       "repoDigests": [
	I0108 20:33:57.120602  702522 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb",
	I0108 20:33:57.120615  702522 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"
	I0108 20:33:57.120620  702522 command_runner.go:130] >       ],
	I0108 20:33:57.120625  702522 command_runner.go:130] >       "size": "121119694",
	I0108 20:33:57.120633  702522 command_runner.go:130] >       "uid": {
	I0108 20:33:57.120640  702522 command_runner.go:130] >         "value": "0"
	I0108 20:33:57.120645  702522 command_runner.go:130] >       },
	I0108 20:33:57.120650  702522 command_runner.go:130] >       "username": "",
	I0108 20:33:57.120658  702522 command_runner.go:130] >       "spec": null,
	I0108 20:33:57.120668  702522 command_runner.go:130] >       "pinned": false
	I0108 20:33:57.120673  702522 command_runner.go:130] >     },
	I0108 20:33:57.120681  702522 command_runner.go:130] >     {
	I0108 20:33:57.120689  702522 command_runner.go:130] >       "id": "9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b",
	I0108 20:33:57.120698  702522 command_runner.go:130] >       "repoTags": [
	I0108 20:33:57.120705  702522 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0108 20:33:57.120714  702522 command_runner.go:130] >       ],
	I0108 20:33:57.120721  702522 command_runner.go:130] >       "repoDigests": [
	I0108 20:33:57.120731  702522 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0108 20:33:57.120741  702522 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"
	I0108 20:33:57.120745  702522 command_runner.go:130] >       ],
	I0108 20:33:57.120751  702522 command_runner.go:130] >       "size": "117252916",
	I0108 20:33:57.120756  702522 command_runner.go:130] >       "uid": {
	I0108 20:33:57.120765  702522 command_runner.go:130] >         "value": "0"
	I0108 20:33:57.120773  702522 command_runner.go:130] >       },
	I0108 20:33:57.120778  702522 command_runner.go:130] >       "username": "",
	I0108 20:33:57.120784  702522 command_runner.go:130] >       "spec": null,
	I0108 20:33:57.120793  702522 command_runner.go:130] >       "pinned": false
	I0108 20:33:57.120798  702522 command_runner.go:130] >     },
	I0108 20:33:57.120806  702522 command_runner.go:130] >     {
	I0108 20:33:57.120815  702522 command_runner.go:130] >       "id": "3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39",
	I0108 20:33:57.120820  702522 command_runner.go:130] >       "repoTags": [
	I0108 20:33:57.120829  702522 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0108 20:33:57.120841  702522 command_runner.go:130] >       ],
	I0108 20:33:57.120846  702522 command_runner.go:130] >       "repoDigests": [
	I0108 20:33:57.120856  702522 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68",
	I0108 20:33:57.120869  702522 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0108 20:33:57.120877  702522 command_runner.go:130] >       ],
	I0108 20:33:57.120883  702522 command_runner.go:130] >       "size": "69992343",
	I0108 20:33:57.120904  702522 command_runner.go:130] >       "uid": null,
	I0108 20:33:57.120910  702522 command_runner.go:130] >       "username": "",
	I0108 20:33:57.120939  702522 command_runner.go:130] >       "spec": null,
	I0108 20:33:57.120944  702522 command_runner.go:130] >       "pinned": false
	I0108 20:33:57.120949  702522 command_runner.go:130] >     },
	I0108 20:33:57.120961  702522 command_runner.go:130] >     {
	I0108 20:33:57.120972  702522 command_runner.go:130] >       "id": "05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54",
	I0108 20:33:57.120989  702522 command_runner.go:130] >       "repoTags": [
	I0108 20:33:57.120997  702522 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0108 20:33:57.121002  702522 command_runner.go:130] >       ],
	I0108 20:33:57.121009  702522 command_runner.go:130] >       "repoDigests": [
	I0108 20:33:57.121032  702522 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0108 20:33:57.121046  702522 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"
	I0108 20:33:57.121053  702522 command_runner.go:130] >       ],
	I0108 20:33:57.121059  702522 command_runner.go:130] >       "size": "59253556",
	I0108 20:33:57.121067  702522 command_runner.go:130] >       "uid": {
	I0108 20:33:57.121073  702522 command_runner.go:130] >         "value": "0"
	I0108 20:33:57.121078  702522 command_runner.go:130] >       },
	I0108 20:33:57.121083  702522 command_runner.go:130] >       "username": "",
	I0108 20:33:57.121090  702522 command_runner.go:130] >       "spec": null,
	I0108 20:33:57.121097  702522 command_runner.go:130] >       "pinned": false
	I0108 20:33:57.121104  702522 command_runner.go:130] >     },
	I0108 20:33:57.121109  702522 command_runner.go:130] >     {
	I0108 20:33:57.121120  702522 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0108 20:33:57.121129  702522 command_runner.go:130] >       "repoTags": [
	I0108 20:33:57.121136  702522 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0108 20:33:57.121140  702522 command_runner.go:130] >       ],
	I0108 20:33:57.121145  702522 command_runner.go:130] >       "repoDigests": [
	I0108 20:33:57.121158  702522 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0108 20:33:57.121170  702522 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0108 20:33:57.121175  702522 command_runner.go:130] >       ],
	I0108 20:33:57.121180  702522 command_runner.go:130] >       "size": "520014",
	I0108 20:33:57.121185  702522 command_runner.go:130] >       "uid": {
	I0108 20:33:57.121191  702522 command_runner.go:130] >         "value": "65535"
	I0108 20:33:57.121201  702522 command_runner.go:130] >       },
	I0108 20:33:57.121210  702522 command_runner.go:130] >       "username": "",
	I0108 20:33:57.121215  702522 command_runner.go:130] >       "spec": null,
	I0108 20:33:57.121221  702522 command_runner.go:130] >       "pinned": false
	I0108 20:33:57.121230  702522 command_runner.go:130] >     }
	I0108 20:33:57.121234  702522 command_runner.go:130] >   ]
	I0108 20:33:57.121242  702522 command_runner.go:130] > }
	I0108 20:33:57.124021  702522 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 20:33:57.124044  702522 crio.go:415] Images already preloaded, skipping extraction
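	Editor's note: the JSON dumps from "crictl images --output json" are consumed programmatically; for a human-readable view of the preloaded tags, a sketch (assumes jq is installed on the node):
	  sudo crictl images --output json | jq -r '.images[].repoTags[]'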
	I0108 20:33:57.124099  702522 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:33:57.168313  702522 command_runner.go:130] > {
	I0108 20:33:57.168390  702522 command_runner.go:130] >   "images": [
	I0108 20:33:57.168402  702522 command_runner.go:130] >     {
	I0108 20:33:57.168417  702522 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I0108 20:33:57.168423  702522 command_runner.go:130] >       "repoTags": [
	I0108 20:33:57.168430  702522 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0108 20:33:57.168460  702522 command_runner.go:130] >       ],
	I0108 20:33:57.168473  702522 command_runner.go:130] >       "repoDigests": [
	I0108 20:33:57.168484  702522 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0108 20:33:57.168507  702522 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I0108 20:33:57.168515  702522 command_runner.go:130] >       ],
	I0108 20:33:57.168520  702522 command_runner.go:130] >       "size": "60867618",
	I0108 20:33:57.168537  702522 command_runner.go:130] >       "uid": null,
	I0108 20:33:57.168553  702522 command_runner.go:130] >       "username": "",
	I0108 20:33:57.168575  702522 command_runner.go:130] >       "spec": null,
	I0108 20:33:57.168587  702522 command_runner.go:130] >       "pinned": false
	I0108 20:33:57.168591  702522 command_runner.go:130] >     },
	I0108 20:33:57.168596  702522 command_runner.go:130] >     {
	I0108 20:33:57.168607  702522 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0108 20:33:57.168612  702522 command_runner.go:130] >       "repoTags": [
	I0108 20:33:57.168619  702522 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0108 20:33:57.168624  702522 command_runner.go:130] >       ],
	I0108 20:33:57.168629  702522 command_runner.go:130] >       "repoDigests": [
	I0108 20:33:57.168650  702522 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0108 20:33:57.168662  702522 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0108 20:33:57.168667  702522 command_runner.go:130] >       ],
	I0108 20:33:57.168673  702522 command_runner.go:130] >       "size": "29037500",
	I0108 20:33:57.168678  702522 command_runner.go:130] >       "uid": null,
	I0108 20:33:57.168683  702522 command_runner.go:130] >       "username": "",
	I0108 20:33:57.168687  702522 command_runner.go:130] >       "spec": null,
	I0108 20:33:57.168692  702522 command_runner.go:130] >       "pinned": false
	I0108 20:33:57.168698  702522 command_runner.go:130] >     },
	I0108 20:33:57.168706  702522 command_runner.go:130] >     {
	I0108 20:33:57.168714  702522 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0108 20:33:57.168728  702522 command_runner.go:130] >       "repoTags": [
	I0108 20:33:57.168735  702522 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0108 20:33:57.168739  702522 command_runner.go:130] >       ],
	I0108 20:33:57.168745  702522 command_runner.go:130] >       "repoDigests": [
	I0108 20:33:57.168756  702522 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0108 20:33:57.168769  702522 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0108 20:33:57.168774  702522 command_runner.go:130] >       ],
	I0108 20:33:57.168780  702522 command_runner.go:130] >       "size": "51393451",
	I0108 20:33:57.168788  702522 command_runner.go:130] >       "uid": null,
	I0108 20:33:57.168793  702522 command_runner.go:130] >       "username": "",
	I0108 20:33:57.168799  702522 command_runner.go:130] >       "spec": null,
	I0108 20:33:57.168806  702522 command_runner.go:130] >       "pinned": false
	I0108 20:33:57.168812  702522 command_runner.go:130] >     },
	I0108 20:33:57.168816  702522 command_runner.go:130] >     {
	I0108 20:33:57.168827  702522 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I0108 20:33:57.168837  702522 command_runner.go:130] >       "repoTags": [
	I0108 20:33:57.168844  702522 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0108 20:33:57.168849  702522 command_runner.go:130] >       ],
	I0108 20:33:57.168856  702522 command_runner.go:130] >       "repoDigests": [
	I0108 20:33:57.168865  702522 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I0108 20:33:57.168874  702522 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I0108 20:33:57.168883  702522 command_runner.go:130] >       ],
	I0108 20:33:57.168891  702522 command_runner.go:130] >       "size": "182203183",
	I0108 20:33:57.168896  702522 command_runner.go:130] >       "uid": {
	I0108 20:33:57.168901  702522 command_runner.go:130] >         "value": "0"
	I0108 20:33:57.168908  702522 command_runner.go:130] >       },
	I0108 20:33:57.168913  702522 command_runner.go:130] >       "username": "",
	I0108 20:33:57.168918  702522 command_runner.go:130] >       "spec": null,
	I0108 20:33:57.168927  702522 command_runner.go:130] >       "pinned": false
	I0108 20:33:57.168931  702522 command_runner.go:130] >     },
	I0108 20:33:57.168936  702522 command_runner.go:130] >     {
	I0108 20:33:57.168946  702522 command_runner.go:130] >       "id": "04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419",
	I0108 20:33:57.168951  702522 command_runner.go:130] >       "repoTags": [
	I0108 20:33:57.168958  702522 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0108 20:33:57.168966  702522 command_runner.go:130] >       ],
	I0108 20:33:57.168974  702522 command_runner.go:130] >       "repoDigests": [
	I0108 20:33:57.168983  702522 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb",
	I0108 20:33:57.168992  702522 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"
	I0108 20:33:57.169000  702522 command_runner.go:130] >       ],
	I0108 20:33:57.169005  702522 command_runner.go:130] >       "size": "121119694",
	I0108 20:33:57.169010  702522 command_runner.go:130] >       "uid": {
	I0108 20:33:57.169017  702522 command_runner.go:130] >         "value": "0"
	I0108 20:33:57.169021  702522 command_runner.go:130] >       },
	I0108 20:33:57.169029  702522 command_runner.go:130] >       "username": "",
	I0108 20:33:57.169033  702522 command_runner.go:130] >       "spec": null,
	I0108 20:33:57.169038  702522 command_runner.go:130] >       "pinned": false
	I0108 20:33:57.169042  702522 command_runner.go:130] >     },
	I0108 20:33:57.169047  702522 command_runner.go:130] >     {
	I0108 20:33:57.169054  702522 command_runner.go:130] >       "id": "9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b",
	I0108 20:33:57.169061  702522 command_runner.go:130] >       "repoTags": [
	I0108 20:33:57.169068  702522 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0108 20:33:57.169077  702522 command_runner.go:130] >       ],
	I0108 20:33:57.169083  702522 command_runner.go:130] >       "repoDigests": [
	I0108 20:33:57.169092  702522 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0108 20:33:57.169105  702522 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"
	I0108 20:33:57.169110  702522 command_runner.go:130] >       ],
	I0108 20:33:57.169116  702522 command_runner.go:130] >       "size": "117252916",
	I0108 20:33:57.169120  702522 command_runner.go:130] >       "uid": {
	I0108 20:33:57.169126  702522 command_runner.go:130] >         "value": "0"
	I0108 20:33:57.169130  702522 command_runner.go:130] >       },
	I0108 20:33:57.169138  702522 command_runner.go:130] >       "username": "",
	I0108 20:33:57.169144  702522 command_runner.go:130] >       "spec": null,
	I0108 20:33:57.169150  702522 command_runner.go:130] >       "pinned": false
	I0108 20:33:57.169157  702522 command_runner.go:130] >     },
	I0108 20:33:57.169161  702522 command_runner.go:130] >     {
	I0108 20:33:57.169169  702522 command_runner.go:130] >       "id": "3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39",
	I0108 20:33:57.169176  702522 command_runner.go:130] >       "repoTags": [
	I0108 20:33:57.169182  702522 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0108 20:33:57.169187  702522 command_runner.go:130] >       ],
	I0108 20:33:57.169196  702522 command_runner.go:130] >       "repoDigests": [
	I0108 20:33:57.169205  702522 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68",
	I0108 20:33:57.169214  702522 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0108 20:33:57.169221  702522 command_runner.go:130] >       ],
	I0108 20:33:57.169229  702522 command_runner.go:130] >       "size": "69992343",
	I0108 20:33:57.169234  702522 command_runner.go:130] >       "uid": null,
	I0108 20:33:57.169241  702522 command_runner.go:130] >       "username": "",
	I0108 20:33:57.169246  702522 command_runner.go:130] >       "spec": null,
	I0108 20:33:57.169250  702522 command_runner.go:130] >       "pinned": false
	I0108 20:33:57.169260  702522 command_runner.go:130] >     },
	I0108 20:33:57.169268  702522 command_runner.go:130] >     {
	I0108 20:33:57.169278  702522 command_runner.go:130] >       "id": "05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54",
	I0108 20:33:57.169283  702522 command_runner.go:130] >       "repoTags": [
	I0108 20:33:57.169290  702522 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0108 20:33:57.169294  702522 command_runner.go:130] >       ],
	I0108 20:33:57.169302  702522 command_runner.go:130] >       "repoDigests": [
	I0108 20:33:57.169324  702522 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0108 20:33:57.169341  702522 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"
	I0108 20:33:57.169350  702522 command_runner.go:130] >       ],
	I0108 20:33:57.169362  702522 command_runner.go:130] >       "size": "59253556",
	I0108 20:33:57.169367  702522 command_runner.go:130] >       "uid": {
	I0108 20:33:57.169375  702522 command_runner.go:130] >         "value": "0"
	I0108 20:33:57.169380  702522 command_runner.go:130] >       },
	I0108 20:33:57.169390  702522 command_runner.go:130] >       "username": "",
	I0108 20:33:57.169398  702522 command_runner.go:130] >       "spec": null,
	I0108 20:33:57.169406  702522 command_runner.go:130] >       "pinned": false
	I0108 20:33:57.169411  702522 command_runner.go:130] >     },
	I0108 20:33:57.169417  702522 command_runner.go:130] >     {
	I0108 20:33:57.169425  702522 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0108 20:33:57.169433  702522 command_runner.go:130] >       "repoTags": [
	I0108 20:33:57.169439  702522 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0108 20:33:57.169443  702522 command_runner.go:130] >       ],
	I0108 20:33:57.169448  702522 command_runner.go:130] >       "repoDigests": [
	I0108 20:33:57.169460  702522 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0108 20:33:57.169469  702522 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0108 20:33:57.169482  702522 command_runner.go:130] >       ],
	I0108 20:33:57.169489  702522 command_runner.go:130] >       "size": "520014",
	I0108 20:33:57.169497  702522 command_runner.go:130] >       "uid": {
	I0108 20:33:57.169509  702522 command_runner.go:130] >         "value": "65535"
	I0108 20:33:57.169514  702522 command_runner.go:130] >       },
	I0108 20:33:57.169521  702522 command_runner.go:130] >       "username": "",
	I0108 20:33:57.169526  702522 command_runner.go:130] >       "spec": null,
	I0108 20:33:57.169534  702522 command_runner.go:130] >       "pinned": false
	I0108 20:33:57.169544  702522 command_runner.go:130] >     }
	I0108 20:33:57.169549  702522 command_runner.go:130] >   ]
	I0108 20:33:57.169555  702522 command_runner.go:130] > }
	I0108 20:33:57.169712  702522 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 20:33:57.169723  702522 cache_images.go:84] Images are preloaded, skipping loading
	I0108 20:33:57.169809  702522 ssh_runner.go:195] Run: crio config
	I0108 20:33:57.229039  702522 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0108 20:33:57.229062  702522 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0108 20:33:57.229071  702522 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0108 20:33:57.229075  702522 command_runner.go:130] > #
	I0108 20:33:57.229083  702522 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0108 20:33:57.229091  702522 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0108 20:33:57.229099  702522 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0108 20:33:57.229110  702522 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0108 20:33:57.229115  702522 command_runner.go:130] > # reload'.
	I0108 20:33:57.229123  702522 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0108 20:33:57.229131  702522 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0108 20:33:57.229139  702522 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0108 20:33:57.229146  702522 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0108 20:33:57.229150  702522 command_runner.go:130] > [crio]
	I0108 20:33:57.229158  702522 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0108 20:33:57.229166  702522 command_runner.go:130] > # containers images, in this directory.
	I0108 20:33:57.229436  702522 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0108 20:33:57.229453  702522 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0108 20:33:57.229461  702522 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0108 20:33:57.229469  702522 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0108 20:33:57.229476  702522 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0108 20:33:57.229521  702522 command_runner.go:130] > # storage_driver = "vfs"
	I0108 20:33:57.229534  702522 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0108 20:33:57.229541  702522 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0108 20:33:57.229551  702522 command_runner.go:130] > # storage_option = [
	I0108 20:33:57.229556  702522 command_runner.go:130] > # ]
	I0108 20:33:57.229564  702522 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0108 20:33:57.229571  702522 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0108 20:33:57.229577  702522 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0108 20:33:57.229583  702522 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0108 20:33:57.229591  702522 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0108 20:33:57.229596  702522 command_runner.go:130] > # always happen on a node reboot
	I0108 20:33:57.229602  702522 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0108 20:33:57.229609  702522 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0108 20:33:57.229616  702522 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0108 20:33:57.229629  702522 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0108 20:33:57.229636  702522 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0108 20:33:57.229645  702522 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0108 20:33:57.229654  702522 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0108 20:33:57.229659  702522 command_runner.go:130] > # internal_wipe = true
	I0108 20:33:57.229666  702522 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0108 20:33:57.229673  702522 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0108 20:33:57.229681  702522 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0108 20:33:57.229687  702522 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0108 20:33:57.229696  702522 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0108 20:33:57.229700  702522 command_runner.go:130] > [crio.api]
	I0108 20:33:57.229707  702522 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0108 20:33:57.229712  702522 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0108 20:33:57.229719  702522 command_runner.go:130] > # IP address on which the stream server will listen.
	I0108 20:33:57.229724  702522 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0108 20:33:57.229732  702522 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0108 20:33:57.229738  702522 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0108 20:33:57.229743  702522 command_runner.go:130] > # stream_port = "0"
	I0108 20:33:57.229749  702522 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0108 20:33:57.229756  702522 command_runner.go:130] > # stream_enable_tls = false
	I0108 20:33:57.229763  702522 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0108 20:33:57.229768  702522 command_runner.go:130] > # stream_idle_timeout = ""
	I0108 20:33:57.229776  702522 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0108 20:33:57.229783  702522 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0108 20:33:57.229788  702522 command_runner.go:130] > # minutes.
	I0108 20:33:57.229794  702522 command_runner.go:130] > # stream_tls_cert = ""
	I0108 20:33:57.229802  702522 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0108 20:33:57.229809  702522 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0108 20:33:57.229814  702522 command_runner.go:130] > # stream_tls_key = ""
	I0108 20:33:57.229821  702522 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0108 20:33:57.229829  702522 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0108 20:33:57.229835  702522 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0108 20:33:57.229840  702522 command_runner.go:130] > # stream_tls_ca = ""
	I0108 20:33:57.229849  702522 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 20:33:57.229855  702522 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0108 20:33:57.229863  702522 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 20:33:57.229869  702522 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0108 20:33:57.229888  702522 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0108 20:33:57.229895  702522 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0108 20:33:57.229900  702522 command_runner.go:130] > [crio.runtime]
	I0108 20:33:57.229907  702522 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0108 20:33:57.229915  702522 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0108 20:33:57.229920  702522 command_runner.go:130] > # "nofile=1024:2048"
	I0108 20:33:57.229929  702522 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0108 20:33:57.229934  702522 command_runner.go:130] > # default_ulimits = [
	I0108 20:33:57.229938  702522 command_runner.go:130] > # ]
	I0108 20:33:57.229946  702522 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0108 20:33:57.229952  702522 command_runner.go:130] > # no_pivot = false
	I0108 20:33:57.229959  702522 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0108 20:33:57.229966  702522 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0108 20:33:57.229972  702522 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0108 20:33:57.229979  702522 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0108 20:33:57.229985  702522 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0108 20:33:57.229994  702522 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 20:33:57.229999  702522 command_runner.go:130] > # conmon = ""
	I0108 20:33:57.230004  702522 command_runner.go:130] > # Cgroup setting for conmon
	I0108 20:33:57.230012  702522 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0108 20:33:57.230018  702522 command_runner.go:130] > conmon_cgroup = "pod"
	I0108 20:33:57.230026  702522 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0108 20:33:57.230032  702522 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0108 20:33:57.230040  702522 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 20:33:57.230046  702522 command_runner.go:130] > # conmon_env = [
	I0108 20:33:57.230050  702522 command_runner.go:130] > # ]
	I0108 20:33:57.230057  702522 command_runner.go:130] > # Additional environment variables to set for all the
	I0108 20:33:57.230063  702522 command_runner.go:130] > # containers. These are overridden if set in the
	I0108 20:33:57.230074  702522 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0108 20:33:57.230078  702522 command_runner.go:130] > # default_env = [
	I0108 20:33:57.230083  702522 command_runner.go:130] > # ]
	I0108 20:33:57.230091  702522 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0108 20:33:57.230105  702522 command_runner.go:130] > # selinux = false
	I0108 20:33:57.230114  702522 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0108 20:33:57.230127  702522 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0108 20:33:57.230135  702522 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0108 20:33:57.230141  702522 command_runner.go:130] > # seccomp_profile = ""
	I0108 20:33:57.230149  702522 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0108 20:33:57.230158  702522 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0108 20:33:57.230166  702522 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0108 20:33:57.230172  702522 command_runner.go:130] > # which might increase security.
	I0108 20:33:57.230177  702522 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0108 20:33:57.230186  702522 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0108 20:33:57.230194  702522 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0108 20:33:57.230201  702522 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0108 20:33:57.230209  702522 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0108 20:33:57.230215  702522 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:33:57.230220  702522 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0108 20:33:57.230228  702522 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0108 20:33:57.230234  702522 command_runner.go:130] > # the cgroup blockio controller.
	I0108 20:33:57.230239  702522 command_runner.go:130] > # blockio_config_file = ""
	I0108 20:33:57.230246  702522 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0108 20:33:57.230251  702522 command_runner.go:130] > # irqbalance daemon.
	I0108 20:33:57.230260  702522 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0108 20:33:57.230268  702522 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0108 20:33:57.230274  702522 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:33:57.230279  702522 command_runner.go:130] > # rdt_config_file = ""
	I0108 20:33:57.230286  702522 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0108 20:33:57.230291  702522 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0108 20:33:57.230299  702522 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0108 20:33:57.230305  702522 command_runner.go:130] > # separate_pull_cgroup = ""
	I0108 20:33:57.230313  702522 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0108 20:33:57.230320  702522 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0108 20:33:57.230325  702522 command_runner.go:130] > # will be added.
	I0108 20:33:57.230330  702522 command_runner.go:130] > # default_capabilities = [
	I0108 20:33:57.230630  702522 command_runner.go:130] > # 	"CHOWN",
	I0108 20:33:57.230643  702522 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0108 20:33:57.230648  702522 command_runner.go:130] > # 	"FSETID",
	I0108 20:33:57.230653  702522 command_runner.go:130] > # 	"FOWNER",
	I0108 20:33:57.230657  702522 command_runner.go:130] > # 	"SETGID",
	I0108 20:33:57.230662  702522 command_runner.go:130] > # 	"SETUID",
	I0108 20:33:57.230666  702522 command_runner.go:130] > # 	"SETPCAP",
	I0108 20:33:57.230675  702522 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0108 20:33:57.230679  702522 command_runner.go:130] > # 	"KILL",
	I0108 20:33:57.230686  702522 command_runner.go:130] > # ]
	I0108 20:33:57.230695  702522 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0108 20:33:57.230704  702522 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0108 20:33:57.230710  702522 command_runner.go:130] > # add_inheritable_capabilities = true
	I0108 20:33:57.230721  702522 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0108 20:33:57.230731  702522 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 20:33:57.230736  702522 command_runner.go:130] > # default_sysctls = [
	I0108 20:33:57.230740  702522 command_runner.go:130] > # ]
	I0108 20:33:57.230748  702522 command_runner.go:130] > # List of devices on the host that a
	I0108 20:33:57.230758  702522 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0108 20:33:57.230763  702522 command_runner.go:130] > # allowed_devices = [
	I0108 20:33:57.230775  702522 command_runner.go:130] > # 	"/dev/fuse",
	I0108 20:33:57.230779  702522 command_runner.go:130] > # ]
	I0108 20:33:57.230785  702522 command_runner.go:130] > # List of additional devices, specified as
	I0108 20:33:57.230816  702522 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0108 20:33:57.230826  702522 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0108 20:33:57.230834  702522 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 20:33:57.230844  702522 command_runner.go:130] > # additional_devices = [
	I0108 20:33:57.230852  702522 command_runner.go:130] > # ]
	I0108 20:33:57.230859  702522 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0108 20:33:57.230863  702522 command_runner.go:130] > # cdi_spec_dirs = [
	I0108 20:33:57.230868  702522 command_runner.go:130] > # 	"/etc/cdi",
	I0108 20:33:57.230875  702522 command_runner.go:130] > # 	"/var/run/cdi",
	I0108 20:33:57.230882  702522 command_runner.go:130] > # ]
	I0108 20:33:57.230890  702522 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0108 20:33:57.230897  702522 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0108 20:33:57.230902  702522 command_runner.go:130] > # Defaults to false.
	I0108 20:33:57.230908  702522 command_runner.go:130] > # device_ownership_from_security_context = false
	I0108 20:33:57.230919  702522 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0108 20:33:57.230929  702522 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0108 20:33:57.230933  702522 command_runner.go:130] > # hooks_dir = [
	I0108 20:33:57.230939  702522 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0108 20:33:57.230943  702522 command_runner.go:130] > # ]
	I0108 20:33:57.230950  702522 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0108 20:33:57.230958  702522 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0108 20:33:57.230966  702522 command_runner.go:130] > # its default mounts from the following two files:
	I0108 20:33:57.230970  702522 command_runner.go:130] > #
	I0108 20:33:57.230977  702522 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0108 20:33:57.230985  702522 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0108 20:33:57.230992  702522 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0108 20:33:57.230997  702522 command_runner.go:130] > #
	I0108 20:33:57.231004  702522 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0108 20:33:57.231012  702522 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0108 20:33:57.231022  702522 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0108 20:33:57.231028  702522 command_runner.go:130] > #      only add mounts it finds in this file.
	I0108 20:33:57.231036  702522 command_runner.go:130] > #
	I0108 20:33:57.231041  702522 command_runner.go:130] > # default_mounts_file = ""
	I0108 20:33:57.231048  702522 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0108 20:33:57.231059  702522 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0108 20:33:57.231064  702522 command_runner.go:130] > # pids_limit = 0
	I0108 20:33:57.231074  702522 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0108 20:33:57.231081  702522 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0108 20:33:57.231089  702522 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0108 20:33:57.231098  702522 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0108 20:33:57.231107  702522 command_runner.go:130] > # log_size_max = -1
	I0108 20:33:57.231115  702522 command_runner.go:130] > # Whether container output should be logged to journald in addition to the Kubernetes log file
	I0108 20:33:57.231122  702522 command_runner.go:130] > # log_to_journald = false
	I0108 20:33:57.231129  702522 command_runner.go:130] > # Path to the directory in which container exit files are written by conmon.
	I0108 20:33:57.231140  702522 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0108 20:33:57.231147  702522 command_runner.go:130] > # Path to directory for container attach sockets.
	I0108 20:33:57.231156  702522 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0108 20:33:57.231162  702522 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0108 20:33:57.231168  702522 command_runner.go:130] > # bind_mount_prefix = ""
	I0108 20:33:57.231174  702522 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0108 20:33:57.231181  702522 command_runner.go:130] > # read_only = false
	I0108 20:33:57.231191  702522 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0108 20:33:57.231199  702522 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0108 20:33:57.231207  702522 command_runner.go:130] > # live configuration reload.
	I0108 20:33:57.231211  702522 command_runner.go:130] > # log_level = "info"
	I0108 20:33:57.231218  702522 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0108 20:33:57.231226  702522 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:33:57.231233  702522 command_runner.go:130] > # log_filter = ""
	I0108 20:33:57.231243  702522 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0108 20:33:57.231252  702522 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0108 20:33:57.231257  702522 command_runner.go:130] > # separated by comma.
	I0108 20:33:57.231262  702522 command_runner.go:130] > # uid_mappings = ""
	I0108 20:33:57.231274  702522 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0108 20:33:57.231284  702522 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0108 20:33:57.231289  702522 command_runner.go:130] > # separated by comma.
	I0108 20:33:57.231293  702522 command_runner.go:130] > # gid_mappings = ""
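The uid_mappings/gid_mappings syntax above (containerID:hostID:size ranges, comma-separated) maps directly onto the structure Go's own syscall package uses for user-namespace ID maps. A hedged sketch, not CRI-O's code:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"syscall"
)

// parseIDMappings turns e.g. "0:100000:65536,65536:165536:1000"
// into the slice Go passes via SysProcAttr.UidMappings (Linux only).
func parseIDMappings(s string) ([]syscall.SysProcIDMap, error) {
	var maps []syscall.SysProcIDMap
	for _, r := range strings.Split(s, ",") {
		f := strings.Split(r, ":")
		if len(f) != 3 {
			return nil, fmt.Errorf("want containerID:hostID:size, got %q", r)
		}
		vals := make([]int, 3)
		for i, x := range f {
			v, err := strconv.Atoi(x)
			if err != nil {
				return nil, err
			}
			vals[i] = v
		}
		maps = append(maps, syscall.SysProcIDMap{
			ContainerID: vals[0], HostID: vals[1], Size: vals[2],
		})
	}
	return maps, nil
}

func main() {
	m, err := parseIDMappings("0:100000:65536")
	fmt.Println(m, err)
}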
	I0108 20:33:57.231304  702522 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0108 20:33:57.231312  702522 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 20:33:57.231324  702522 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 20:33:57.231329  702522 command_runner.go:130] > # minimum_mappable_uid = -1
	I0108 20:33:57.231337  702522 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0108 20:33:57.231346  702522 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 20:33:57.231357  702522 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 20:33:57.231362  702522 command_runner.go:130] > # minimum_mappable_gid = -1
	I0108 20:33:57.231370  702522 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0108 20:33:57.231380  702522 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0108 20:33:57.231387  702522 command_runner.go:130] > # value is 30s; lower values are ignored by CRI-O.
	I0108 20:33:57.231396  702522 command_runner.go:130] > # ctr_stop_timeout = 30
	I0108 20:33:57.231403  702522 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0108 20:33:57.231412  702522 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0108 20:33:57.231422  702522 command_runner.go:130] > # a kernel-separating runtime (like kata).
	I0108 20:33:57.231430  702522 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0108 20:33:57.231438  702522 command_runner.go:130] > # drop_infra_ctr = true
	I0108 20:33:57.231445  702522 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0108 20:33:57.231455  702522 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0108 20:33:57.231463  702522 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0108 20:33:57.231471  702522 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0108 20:33:57.231478  702522 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0108 20:33:57.231487  702522 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0108 20:33:57.231492  702522 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0108 20:33:57.231500  702522 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0108 20:33:57.231505  702522 command_runner.go:130] > # pinns_path = ""
	I0108 20:33:57.231515  702522 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0108 20:33:57.231525  702522 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0108 20:33:57.231533  702522 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0108 20:33:57.231538  702522 command_runner.go:130] > # default_runtime = "runc"
	I0108 20:33:57.231547  702522 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0108 20:33:57.231556  702522 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I0108 20:33:57.231573  702522 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0108 20:33:57.231579  702522 command_runner.go:130] > # creation as a file is not desired either.
	I0108 20:33:57.231589  702522 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0108 20:33:57.231598  702522 command_runner.go:130] > # the hostname is being managed dynamically.
	I0108 20:33:57.231604  702522 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0108 20:33:57.231610  702522 command_runner.go:130] > # ]
	I0108 20:33:57.231618  702522 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0108 20:33:57.231625  702522 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0108 20:33:57.231635  702522 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0108 20:33:57.231643  702522 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0108 20:33:57.231650  702522 command_runner.go:130] > #
	I0108 20:33:57.231655  702522 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0108 20:33:57.231661  702522 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0108 20:33:57.231666  702522 command_runner.go:130] > #  runtime_type = "oci"
	I0108 20:33:57.231672  702522 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0108 20:33:57.231680  702522 command_runner.go:130] > #  privileged_without_host_devices = false
	I0108 20:33:57.231693  702522 command_runner.go:130] > #  allowed_annotations = []
	I0108 20:33:57.231701  702522 command_runner.go:130] > # Where:
	I0108 20:33:57.231713  702522 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0108 20:33:57.231726  702522 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0108 20:33:57.231734  702522 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0108 20:33:57.231744  702522 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0108 20:33:57.231749  702522 command_runner.go:130] > #   in $PATH.
	I0108 20:33:57.231763  702522 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0108 20:33:57.231772  702522 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0108 20:33:57.231779  702522 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0108 20:33:57.231784  702522 command_runner.go:130] > #   state.
	I0108 20:33:57.231795  702522 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0108 20:33:57.231806  702522 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0108 20:33:57.231817  702522 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0108 20:33:57.231823  702522 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0108 20:33:57.231831  702522 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0108 20:33:57.231839  702522 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0108 20:33:57.231850  702522 command_runner.go:130] > #   The currently recognized values are:
	I0108 20:33:57.231860  702522 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0108 20:33:57.231869  702522 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0108 20:33:57.231881  702522 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0108 20:33:57.231889  702522 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0108 20:33:57.231904  702522 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0108 20:33:57.231912  702522 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0108 20:33:57.231922  702522 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0108 20:33:57.231933  702522 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0108 20:33:57.231941  702522 command_runner.go:130] > #   should be moved to the container's cgroup
	I0108 20:33:57.231946  702522 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0108 20:33:57.231959  702522 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0108 20:33:57.231964  702522 command_runner.go:130] > runtime_type = "oci"
	I0108 20:33:57.231972  702522 command_runner.go:130] > runtime_root = "/run/runc"
	I0108 20:33:57.231977  702522 command_runner.go:130] > runtime_config_path = ""
	I0108 20:33:57.231985  702522 command_runner.go:130] > monitor_path = ""
	I0108 20:33:57.231990  702522 command_runner.go:130] > monitor_cgroup = ""
	I0108 20:33:57.231997  702522 command_runner.go:130] > monitor_exec_cgroup = ""
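crio.conf is TOML, so a runtimes table like the runc entry above can be read back with any TOML library. A minimal sketch, assuming the github.com/BurntSushi/toml package (the TOML library CRI-O itself vendors) and the usual config path; the struct shape is illustrative, not CRI-O's internal config type:

package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

// runtimeHandler holds the documented per-handler keys we care about.
type runtimeHandler struct {
	RuntimePath string `toml:"runtime_path"`
	RuntimeType string `toml:"runtime_type"`
	RuntimeRoot string `toml:"runtime_root"`
}

type config struct {
	Crio struct {
		Runtime struct {
			Runtimes map[string]runtimeHandler `toml:"runtimes"`
		} `toml:"runtime"`
	} `toml:"crio"`
}

func main() {
	var cfg config
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		panic(err)
	}
	for name, rt := range cfg.Crio.Runtime.Runtimes {
		fmt.Printf("handler %q -> %s (type %s, root %s)\n",
			name, rt.RuntimePath, rt.RuntimeType, rt.RuntimeRoot)
	}
}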
	I0108 20:33:57.232031  702522 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0108 20:33:57.232040  702522 command_runner.go:130] > # running containers
	I0108 20:33:57.232045  702522 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0108 20:33:57.232055  702522 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0108 20:33:57.232067  702522 command_runner.go:130] > # VMs. Kata provides additional isolation from the host, minimizing the host attack
	I0108 20:33:57.232074  702522 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0108 20:33:57.232080  702522 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0108 20:33:57.232085  702522 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0108 20:33:57.232091  702522 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0108 20:33:57.232106  702522 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0108 20:33:57.232112  702522 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0108 20:33:57.232121  702522 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0108 20:33:57.232132  702522 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0108 20:33:57.232144  702522 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0108 20:33:57.232157  702522 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0108 20:33:57.232169  702522 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0108 20:33:57.232179  702522 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0108 20:33:57.232192  702522 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0108 20:33:57.232203  702522 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0108 20:33:57.232223  702522 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0108 20:33:57.232234  702522 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0108 20:33:57.232244  702522 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0108 20:33:57.232252  702522 command_runner.go:130] > # Example:
	I0108 20:33:57.232261  702522 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0108 20:33:57.232269  702522 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0108 20:33:57.232275  702522 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0108 20:33:57.232283  702522 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0108 20:33:57.232288  702522 command_runner.go:130] > # cpuset = "0-1"
	I0108 20:33:57.232292  702522 command_runner.go:130] > # cpushares = 0
	I0108 20:33:57.232297  702522 command_runner.go:130] > # Where:
	I0108 20:33:57.232305  702522 command_runner.go:130] > # The workload name is workload-type.
	I0108 20:33:57.232317  702522 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0108 20:33:57.232326  702522 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0108 20:33:57.232336  702522 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0108 20:33:57.232348  702522 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0108 20:33:57.232360  702522 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0108 20:33:57.232364  702522 command_runner.go:130] > # 
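Concretely, a pod opting into the example workload above would carry annotations like the following; the container name "nginx" and the cpushares value are hypothetical:

package main

import "fmt"

func main() {
	annotations := map[string]string{
		// activation: key-only match, the value is ignored
		"io.crio/workload": "true",
		// per-container override, following the example form above
		"io.crio.workload-type/nginx": `{"cpushares": "512"}`,
	}
	fmt.Println(annotations)
}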
	I0108 20:33:57.232372  702522 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0108 20:33:57.232379  702522 command_runner.go:130] > #
	I0108 20:33:57.232391  702522 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0108 20:33:57.232403  702522 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0108 20:33:57.232410  702522 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0108 20:33:57.232418  702522 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0108 20:33:57.232428  702522 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0108 20:33:57.232438  702522 command_runner.go:130] > [crio.image]
	I0108 20:33:57.232448  702522 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0108 20:33:57.232459  702522 command_runner.go:130] > # default_transport = "docker://"
	I0108 20:33:57.232470  702522 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0108 20:33:57.232481  702522 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0108 20:33:57.232486  702522 command_runner.go:130] > # global_auth_file = ""
	I0108 20:33:57.232495  702522 command_runner.go:130] > # The image used to instantiate infra containers.
	I0108 20:33:57.232501  702522 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:33:57.232509  702522 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0108 20:33:57.232518  702522 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0108 20:33:57.232528  702522 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0108 20:33:57.232534  702522 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:33:57.232543  702522 command_runner.go:130] > # pause_image_auth_file = ""
	I0108 20:33:57.232554  702522 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0108 20:33:57.232565  702522 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0108 20:33:57.232573  702522 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0108 20:33:57.232580  702522 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0108 20:33:57.232585  702522 command_runner.go:130] > # pause_command = "/pause"
	I0108 20:33:57.232592  702522 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0108 20:33:57.232603  702522 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0108 20:33:57.232610  702522 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0108 20:33:57.232620  702522 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0108 20:33:57.232626  702522 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0108 20:33:57.232634  702522 command_runner.go:130] > # signature_policy = ""
	I0108 20:33:57.232641  702522 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0108 20:33:57.232651  702522 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0108 20:33:57.232656  702522 command_runner.go:130] > # changing them here.
	I0108 20:33:57.232661  702522 command_runner.go:130] > # insecure_registries = [
	I0108 20:33:57.232665  702522 command_runner.go:130] > # ]
	I0108 20:33:57.232672  702522 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0108 20:33:57.232685  702522 command_runner.go:130] > # ignore; the last one ignores volumes entirely.
	I0108 20:33:57.232694  702522 command_runner.go:130] > # image_volumes = "mkdir"
	I0108 20:33:57.232704  702522 command_runner.go:130] > # Temporary directory to use for storing big files
	I0108 20:33:57.232709  702522 command_runner.go:130] > # big_files_temporary_dir = ""
	I0108 20:33:57.232718  702522 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0108 20:33:57.232725  702522 command_runner.go:130] > # CNI plugins.
	I0108 20:33:57.232733  702522 command_runner.go:130] > [crio.network]
	I0108 20:33:57.232740  702522 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0108 20:33:57.232747  702522 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0108 20:33:57.232755  702522 command_runner.go:130] > # cni_default_network = ""
	I0108 20:33:57.232770  702522 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0108 20:33:57.232778  702522 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0108 20:33:57.232788  702522 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0108 20:33:57.232795  702522 command_runner.go:130] > # plugin_dirs = [
	I0108 20:33:57.232803  702522 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0108 20:33:57.232807  702522 command_runner.go:130] > # ]
	I0108 20:33:57.232814  702522 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0108 20:33:57.232819  702522 command_runner.go:130] > [crio.metrics]
	I0108 20:33:57.232825  702522 command_runner.go:130] > # Globally enable or disable metrics support.
	I0108 20:33:57.232835  702522 command_runner.go:130] > # enable_metrics = false
	I0108 20:33:57.232841  702522 command_runner.go:130] > # Specify enabled metrics collectors.
	I0108 20:33:57.232847  702522 command_runner.go:130] > # By default, all metrics are enabled.
	I0108 20:33:57.232857  702522 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0108 20:33:57.232864  702522 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0108 20:33:57.232874  702522 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0108 20:33:57.232879  702522 command_runner.go:130] > # metrics_collectors = [
	I0108 20:33:57.232886  702522 command_runner.go:130] > # 	"operations",
	I0108 20:33:57.232892  702522 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0108 20:33:57.232897  702522 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0108 20:33:57.232902  702522 command_runner.go:130] > # 	"operations_errors",
	I0108 20:33:57.232907  702522 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0108 20:33:57.232914  702522 command_runner.go:130] > # 	"image_pulls_by_name",
	I0108 20:33:57.232920  702522 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0108 20:33:57.232927  702522 command_runner.go:130] > # 	"image_pulls_failures",
	I0108 20:33:57.232932  702522 command_runner.go:130] > # 	"image_pulls_successes",
	I0108 20:33:57.232937  702522 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0108 20:33:57.232944  702522 command_runner.go:130] > # 	"image_layer_reuse",
	I0108 20:33:57.232952  702522 command_runner.go:130] > # 	"containers_oom_total",
	I0108 20:33:57.232962  702522 command_runner.go:130] > # 	"containers_oom",
	I0108 20:33:57.232967  702522 command_runner.go:130] > # 	"processes_defunct",
	I0108 20:33:57.232972  702522 command_runner.go:130] > # 	"operations_total",
	I0108 20:33:57.232981  702522 command_runner.go:130] > # 	"operations_latency_seconds",
	I0108 20:33:57.232990  702522 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0108 20:33:57.232997  702522 command_runner.go:130] > # 	"operations_errors_total",
	I0108 20:33:57.233003  702522 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0108 20:33:57.233008  702522 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0108 20:33:57.233015  702522 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0108 20:33:57.233023  702522 command_runner.go:130] > # 	"image_pulls_success_total",
	I0108 20:33:57.233028  702522 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0108 20:33:57.233034  702522 command_runner.go:130] > # 	"containers_oom_count_total",
	I0108 20:33:57.233040  702522 command_runner.go:130] > # ]
	I0108 20:33:57.233049  702522 command_runner.go:130] > # The port on which the metrics server will listen.
	I0108 20:33:57.233054  702522 command_runner.go:130] > # metrics_port = 9090
	I0108 20:33:57.233060  702522 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0108 20:33:57.233065  702522 command_runner.go:130] > # metrics_socket = ""
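If metrics were enabled (enable_metrics = true), the collectors listed above would be exposed in Prometheus text format on the port shown. A minimal Go sketch of a scrape, assuming the default port 9090; against the instance configured above it would fail, since metrics are left disabled:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// Prometheus text format, e.g. lines beginning with crio_operations...
	fmt.Printf("%s", body)
}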
	I0108 20:33:57.233076  702522 command_runner.go:130] > # The certificate for the secure metrics server.
	I0108 20:33:57.233086  702522 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0108 20:33:57.233094  702522 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0108 20:33:57.233103  702522 command_runner.go:130] > # certificate on any modification event.
	I0108 20:33:57.233110  702522 command_runner.go:130] > # metrics_cert = ""
	I0108 20:33:57.233116  702522 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0108 20:33:57.233125  702522 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0108 20:33:57.233130  702522 command_runner.go:130] > # metrics_key = ""
	I0108 20:33:57.233137  702522 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0108 20:33:57.233141  702522 command_runner.go:130] > [crio.tracing]
	I0108 20:33:57.233148  702522 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0108 20:33:57.233156  702522 command_runner.go:130] > # enable_tracing = false
	I0108 20:33:57.233165  702522 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0108 20:33:57.233171  702522 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0108 20:33:57.233177  702522 command_runner.go:130] > # Number of samples to collect per million spans.
	I0108 20:33:57.233186  702522 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0108 20:33:57.233195  702522 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0108 20:33:57.233208  702522 command_runner.go:130] > [crio.stats]
	I0108 20:33:57.233217  702522 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0108 20:33:57.233224  702522 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0108 20:33:57.233229  702522 command_runner.go:130] > # stats_collection_period = 0
	I0108 20:33:57.235143  702522 command_runner.go:130] ! time="2024-01-08 20:33:57.223187798Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0108 20:33:57.235168  702522 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0108 20:33:57.235243  702522 cni.go:84] Creating CNI manager for ""
	I0108 20:33:57.235257  702522 cni.go:136] 1 nodes found, recommending kindnet
	I0108 20:33:57.235287  702522 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 20:33:57.235308  702522 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-933566 NodeName:multinode-933566 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 20:33:57.235447  702522 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-933566"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 20:33:57.235522  702522 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-933566 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-933566 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
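minikube assembles the kubelet drop-in above from the cluster config before copying it to the node over SSH (the scp lines that follow). A toy Go sketch of the same templating step, with made-up template field names and only a subset of the flags shown above:

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.Node}} --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	t.Execute(os.Stdout, struct {
		Version, Node, IP string
	}{"v1.28.4", "multinode-933566", "192.168.58.2"})
}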
	I0108 20:33:57.235590  702522 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 20:33:57.246202  702522 command_runner.go:130] > kubeadm
	I0108 20:33:57.246223  702522 command_runner.go:130] > kubectl
	I0108 20:33:57.246228  702522 command_runner.go:130] > kubelet
	I0108 20:33:57.246265  702522 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 20:33:57.246342  702522 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 20:33:57.257104  702522 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0108 20:33:57.278944  702522 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 20:33:57.300623  702522 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0108 20:33:57.322039  702522 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0108 20:33:57.326536  702522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
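The bash one-liner above rewrites /etc/hosts in place: it drops any stale control-plane.minikube.internal entry and appends a fresh one. A rough Go equivalent of the same edit (minikube itself shells out, so this is only illustrative, and like the sudo cp it needs root to write the file):

package main

import (
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// drop the old "<ip>\tcontrol-plane.minikube.internal" entry, if any
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.58.2\t"+host)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}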
	I0108 20:33:57.340334  702522 certs.go:56] Setting up /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566 for IP: 192.168.58.2
	I0108 20:33:57.340370  702522 certs.go:190] acquiring lock for shared ca certs: {Name:mk28124a9f2c671691fce8a4307fb3ec09e97812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:33:57.340554  702522 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17907-633350/.minikube/ca.key
	I0108 20:33:57.340621  702522 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17907-633350/.minikube/proxy-client-ca.key
	I0108 20:33:57.340689  702522 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/client.key
	I0108 20:33:57.340707  702522 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/client.crt with IP's: []
	I0108 20:33:58.441775  702522 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/client.crt ...
	I0108 20:33:58.441807  702522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/client.crt: {Name:mk5c038cdc8f2d05f0998991192644e623b04084 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:33:58.442068  702522 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/client.key ...
	I0108 20:33:58.442089  702522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/client.key: {Name:mk0ff00fdbe2b5ee6d507224e925f8dfb73b98c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:33:58.442197  702522 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/apiserver.key.cee25041
	I0108 20:33:58.442218  702522 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 20:33:58.743008  702522 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/apiserver.crt.cee25041 ...
	I0108 20:33:58.743038  702522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/apiserver.crt.cee25041: {Name:mk834c61bf2252850bc730cb6be2f4612f26fac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:33:58.743234  702522 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/apiserver.key.cee25041 ...
	I0108 20:33:58.743248  702522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/apiserver.key.cee25041: {Name:mk86899a56ce153d3dbbe6b6b484789fa1634816 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:33:58.743343  702522 certs.go:337] copying /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/apiserver.crt
	I0108 20:33:58.743444  702522 certs.go:341] copying /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/apiserver.key
	I0108 20:33:58.743500  702522 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/proxy-client.key
	I0108 20:33:58.743521  702522 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/proxy-client.crt with IP's: []
	I0108 20:33:59.031653  702522 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/proxy-client.crt ...
	I0108 20:33:59.031683  702522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/proxy-client.crt: {Name:mk8591d4db9a253627ef7ceac1a23653b770c69b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:33:59.031870  702522 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/proxy-client.key ...
	I0108 20:33:59.031884  702522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/proxy-client.key: {Name:mkf603d124b4d77a79e3b57dba74ecb46a65a82a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
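Each certificate above is produced with Go's crypto/x509. A condensed sketch of generating a serving certificate for the SANs minikube uses here; it self-signs for brevity, whereas minikube signs with its local CA (and takes file locks, omitted here), and the key size, subject, and lifetime are illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// the IP SANs from the log line above
		IPAddresses: []net.IP{
			net.ParseIP("192.168.58.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	certOut, _ := os.Create("apiserver.crt")
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	certOut.Close()
	keyOut, _ := os.Create("apiserver.key")
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	keyOut.Close()
}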
	I0108 20:33:59.031961  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 20:33:59.031988  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 20:33:59.032002  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 20:33:59.032019  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 20:33:59.032031  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 20:33:59.032045  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 20:33:59.032059  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 20:33:59.032074  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 20:33:59.032127  702522 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/home/jenkins/minikube-integration/17907-633350/.minikube/certs/638732.pem (1338 bytes)
	W0108 20:33:59.032166  702522 certs.go:433] ignoring /home/jenkins/minikube-integration/17907-633350/.minikube/certs/home/jenkins/minikube-integration/17907-633350/.minikube/certs/638732_empty.pem, impossibly tiny 0 bytes
	I0108 20:33:59.032180  702522 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 20:33:59.032208  702522 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem (1082 bytes)
	I0108 20:33:59.032236  702522 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem (1123 bytes)
	I0108 20:33:59.032268  702522 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/home/jenkins/minikube-integration/17907-633350/.minikube/certs/key.pem (1679 bytes)
	I0108 20:33:59.032320  702522 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/6387322.pem (1708 bytes)
	I0108 20:33:59.032351  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/6387322.pem -> /usr/share/ca-certificates/6387322.pem
	I0108 20:33:59.032365  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:33:59.032380  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/638732.pem -> /usr/share/ca-certificates/638732.pem
	I0108 20:33:59.032948  702522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 20:33:59.060564  702522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 20:33:59.089149  702522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 20:33:59.117265  702522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 20:33:59.146562  702522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 20:33:59.175882  702522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 20:33:59.204358  702522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 20:33:59.232168  702522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 20:33:59.260166  702522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/6387322.pem --> /usr/share/ca-certificates/6387322.pem (1708 bytes)
	I0108 20:33:59.288726  702522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 20:33:59.317604  702522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/certs/638732.pem --> /usr/share/ca-certificates/638732.pem (1338 bytes)
	I0108 20:33:59.347001  702522 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 20:33:59.367323  702522 ssh_runner.go:195] Run: openssl version
	I0108 20:33:59.376308  702522 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0108 20:33:59.376386  702522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6387322.pem && ln -fs /usr/share/ca-certificates/6387322.pem /etc/ssl/certs/6387322.pem"
	I0108 20:33:59.388538  702522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6387322.pem
	I0108 20:33:59.393028  702522 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 20:18 /usr/share/ca-certificates/6387322.pem
	I0108 20:33:59.393062  702522 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:18 /usr/share/ca-certificates/6387322.pem
	I0108 20:33:59.393111  702522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6387322.pem
	I0108 20:33:59.401200  702522 command_runner.go:130] > 3ec20f2e
	I0108 20:33:59.401624  702522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6387322.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 20:33:59.413347  702522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 20:33:59.424737  702522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:33:59.429230  702522 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:33:59.429295  702522 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:33:59.429345  702522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:33:59.437863  702522 command_runner.go:130] > b5213941
	I0108 20:33:59.437952  702522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 20:33:59.449633  702522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/638732.pem && ln -fs /usr/share/ca-certificates/638732.pem /etc/ssl/certs/638732.pem"
	I0108 20:33:59.460860  702522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/638732.pem
	I0108 20:33:59.465230  702522 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 20:18 /usr/share/ca-certificates/638732.pem
	I0108 20:33:59.465448  702522 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:18 /usr/share/ca-certificates/638732.pem
	I0108 20:33:59.465498  702522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/638732.pem
	I0108 20:33:59.473562  702522 command_runner.go:130] > 51391683
	I0108 20:33:59.474009  702522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/638732.pem /etc/ssl/certs/51391683.0"
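Each round above follows the same pattern: ask openssl for the certificate's subject hash, then link /etc/ssl/certs/<hash>.0 at the PEM so OpenSSL's lookup-by-hash finds it. A hedged Go sketch of the two steps, using the minikubeCA example from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // mimic the force flag of `ln -fs`
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		panic(err)
	}
}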
	I0108 20:33:59.485231  702522 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 20:33:59.489540  702522 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 20:33:59.489577  702522 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 20:33:59.489633  702522 kubeadm.go:404] StartCluster: {Name:multinode-933566 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-933566 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:33:59.489715  702522 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 20:33:59.489774  702522 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 20:33:59.535106  702522 cri.go:89] found id: ""
	I0108 20:33:59.535197  702522 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 20:33:59.545641  702522 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0108 20:33:59.545666  702522 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0108 20:33:59.545674  702522 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0108 20:33:59.545746  702522 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 20:33:59.556202  702522 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 20:33:59.556271  702522 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 20:33:59.566331  702522 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0108 20:33:59.566363  702522 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0108 20:33:59.566373  702522 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0108 20:33:59.566382  702522 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 20:33:59.566421  702522 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 20:33:59.566463  702522 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 20:33:59.622845  702522 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 20:33:59.622879  702522 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0108 20:33:59.623215  702522 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 20:33:59.623258  702522 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 20:33:59.667254  702522 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0108 20:33:59.667294  702522 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0108 20:33:59.667435  702522 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0108 20:33:59.667444  702522 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1051-aws
	I0108 20:33:59.667538  702522 kubeadm.go:322] OS: Linux
	I0108 20:33:59.667569  702522 command_runner.go:130] > OS: Linux
	I0108 20:33:59.667636  702522 kubeadm.go:322] CGROUPS_CPU: enabled
	I0108 20:33:59.667660  702522 command_runner.go:130] > CGROUPS_CPU: enabled
	I0108 20:33:59.667735  702522 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0108 20:33:59.667746  702522 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0108 20:33:59.667803  702522 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0108 20:33:59.667816  702522 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0108 20:33:59.667874  702522 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0108 20:33:59.667883  702522 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0108 20:33:59.667931  702522 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0108 20:33:59.667944  702522 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0108 20:33:59.667991  702522 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0108 20:33:59.668023  702522 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0108 20:33:59.668104  702522 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0108 20:33:59.668141  702522 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0108 20:33:59.668238  702522 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0108 20:33:59.668275  702522 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0108 20:33:59.668364  702522 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0108 20:33:59.668407  702522 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0108 20:33:59.745183  702522 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 20:33:59.745245  702522 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 20:33:59.745404  702522 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 20:33:59.745431  702522 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 20:33:59.745573  702522 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 20:33:59.745599  702522 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 20:33:59.984536  702522 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 20:33:59.986869  702522 out.go:204]   - Generating certificates and keys ...
	I0108 20:33:59.984687  702522 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 20:33:59.987003  702522 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 20:33:59.987021  702522 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0108 20:33:59.987085  702522 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 20:33:59.987096  702522 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0108 20:34:00.145293  702522 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 20:34:00.145376  702522 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 20:34:00.265356  702522 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 20:34:00.265437  702522 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0108 20:34:00.816750  702522 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 20:34:00.816780  702522 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0108 20:34:01.376946  702522 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 20:34:01.376973  702522 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0108 20:34:01.684548  702522 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 20:34:01.684578  702522 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0108 20:34:01.684825  702522 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-933566] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0108 20:34:01.684842  702522 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-933566] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0108 20:34:02.365047  702522 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 20:34:02.365077  702522 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0108 20:34:02.365328  702522 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-933566] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0108 20:34:02.365344  702522 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-933566] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0108 20:34:02.709401  702522 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 20:34:02.709435  702522 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 20:34:02.975712  702522 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 20:34:02.975739  702522 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 20:34:03.495926  702522 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 20:34:03.495957  702522 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0108 20:34:03.496122  702522 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 20:34:03.496138  702522 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 20:34:03.862257  702522 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 20:34:03.862283  702522 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 20:34:04.080822  702522 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 20:34:04.080849  702522 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 20:34:04.400664  702522 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 20:34:04.400689  702522 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 20:34:04.869315  702522 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 20:34:04.869341  702522 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 20:34:04.870062  702522 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 20:34:04.870097  702522 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 20:34:04.874608  702522 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 20:34:04.878077  702522 out.go:204]   - Booting up control plane ...
	I0108 20:34:04.874711  702522 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 20:34:04.878180  702522 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 20:34:04.878199  702522 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 20:34:04.878278  702522 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 20:34:04.878288  702522 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 20:34:04.878809  702522 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 20:34:04.878827  702522 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 20:34:04.889763  702522 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 20:34:04.889793  702522 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 20:34:04.890802  702522 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 20:34:04.890821  702522 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 20:34:04.891033  702522 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 20:34:04.891042  702522 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 20:34:04.992646  702522 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 20:34:04.992673  702522 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 20:34:12.495746  702522 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503182 seconds
	I0108 20:34:12.495777  702522 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.503182 seconds
	I0108 20:34:12.495877  702522 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 20:34:12.495883  702522 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 20:34:12.510476  702522 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 20:34:12.510501  702522 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 20:34:13.041362  702522 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 20:34:13.041387  702522 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0108 20:34:13.041559  702522 kubeadm.go:322] [mark-control-plane] Marking the node multinode-933566 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 20:34:13.041575  702522 command_runner.go:130] > [mark-control-plane] Marking the node multinode-933566 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 20:34:13.553160  702522 kubeadm.go:322] [bootstrap-token] Using token: ve6qo2.vxryqsdgq51kqr34
	I0108 20:34:13.555644  702522 out.go:204]   - Configuring RBAC rules ...
	I0108 20:34:13.553255  702522 command_runner.go:130] > [bootstrap-token] Using token: ve6qo2.vxryqsdgq51kqr34
	I0108 20:34:13.555783  702522 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 20:34:13.555801  702522 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 20:34:13.562156  702522 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 20:34:13.562182  702522 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 20:34:13.570482  702522 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 20:34:13.570507  702522 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 20:34:13.575012  702522 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 20:34:13.575051  702522 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 20:34:13.578587  702522 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 20:34:13.578609  702522 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 20:34:13.582074  702522 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 20:34:13.582095  702522 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 20:34:13.596846  702522 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 20:34:13.596868  702522 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 20:34:13.813750  702522 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 20:34:13.813772  702522 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0108 20:34:13.998806  702522 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 20:34:13.998829  702522 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0108 20:34:13.998835  702522 kubeadm.go:322] 
	I0108 20:34:13.998892  702522 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 20:34:13.998899  702522 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0108 20:34:13.998904  702522 kubeadm.go:322] 
	I0108 20:34:13.998976  702522 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 20:34:13.998981  702522 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0108 20:34:13.998985  702522 kubeadm.go:322] 
	I0108 20:34:13.999017  702522 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 20:34:13.999022  702522 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0108 20:34:13.999076  702522 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 20:34:13.999081  702522 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 20:34:13.999128  702522 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 20:34:13.999132  702522 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 20:34:13.999136  702522 kubeadm.go:322] 
	I0108 20:34:13.999187  702522 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 20:34:13.999191  702522 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0108 20:34:13.999195  702522 kubeadm.go:322] 
	I0108 20:34:13.999239  702522 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 20:34:13.999244  702522 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 20:34:13.999248  702522 kubeadm.go:322] 
	I0108 20:34:13.999299  702522 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 20:34:13.999304  702522 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0108 20:34:13.999374  702522 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 20:34:13.999379  702522 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 20:34:13.999443  702522 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 20:34:13.999450  702522 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 20:34:13.999454  702522 kubeadm.go:322] 
	I0108 20:34:13.999532  702522 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 20:34:13.999537  702522 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0108 20:34:13.999608  702522 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 20:34:13.999612  702522 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0108 20:34:13.999616  702522 kubeadm.go:322] 
	I0108 20:34:13.999695  702522 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ve6qo2.vxryqsdgq51kqr34 \
	I0108 20:34:13.999699  702522 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token ve6qo2.vxryqsdgq51kqr34 \
	I0108 20:34:13.999804  702522 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7781d8275fe6fc370b9207d46f90d60f186320d9f0d72d24606e41c221afb39a \
	I0108 20:34:13.999810  702522 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:7781d8275fe6fc370b9207d46f90d60f186320d9f0d72d24606e41c221afb39a \
	I0108 20:34:13.999829  702522 kubeadm.go:322] 	--control-plane 
	I0108 20:34:13.999833  702522 command_runner.go:130] > 	--control-plane 
	I0108 20:34:13.999840  702522 kubeadm.go:322] 
	I0108 20:34:13.999919  702522 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 20:34:13.999925  702522 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0108 20:34:13.999929  702522 kubeadm.go:322] 
	I0108 20:34:14.000006  702522 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ve6qo2.vxryqsdgq51kqr34 \
	I0108 20:34:14.000011  702522 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ve6qo2.vxryqsdgq51kqr34 \
	I0108 20:34:14.000106  702522 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7781d8275fe6fc370b9207d46f90d60f186320d9f0d72d24606e41c221afb39a 
	I0108 20:34:14.000110  702522 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:7781d8275fe6fc370b9207d46f90d60f186320d9f0d72d24606e41c221afb39a 
	I0108 20:34:14.003785  702522 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0108 20:34:14.003859  702522 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0108 20:34:14.004022  702522 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 20:34:14.004048  702522 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
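The --discovery-token-ca-cert-hash printed in both join commands above is not arbitrary: it is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A minimal Go sketch that recomputes it, assuming the CA sits at the certificateDir logged earlier (/var/lib/minikube/certs/ca.crt); run it on the node itself:

// cahash.go: recompute the kubeadm --discovery-token-ca-cert-hash value.
// A sketch, assuming the cluster CA is /var/lib/minikube/certs/ca.crt.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}

Printed output should match the sha256:7781d8... value in the join commands above.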
	I0108 20:34:14.004076  702522 cni.go:84] Creating CNI manager for ""
	I0108 20:34:14.004095  702522 cni.go:136] 1 nodes found, recommending kindnet
	I0108 20:34:14.007355  702522 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 20:34:14.009394  702522 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 20:34:14.026498  702522 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 20:34:14.026573  702522 command_runner.go:130] >   Size: 4030506   	Blocks: 7880       IO Block: 4096   regular file
	I0108 20:34:14.026595  702522 command_runner.go:130] > Device: 3ah/58d	Inode: 1572315     Links: 1
	I0108 20:34:14.026618  702522 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 20:34:14.026653  702522 command_runner.go:130] > Access: 2023-12-04 16:39:54.000000000 +0000
	I0108 20:34:14.026680  702522 command_runner.go:130] > Modify: 2023-12-04 16:39:54.000000000 +0000
	I0108 20:34:14.026701  702522 command_runner.go:130] > Change: 2024-01-08 20:10:27.984657575 +0000
	I0108 20:34:14.026724  702522 command_runner.go:130] >  Birth: 2024-01-08 20:10:27.940657342 +0000
	I0108 20:34:14.026941  702522 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 20:34:14.026954  702522 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 20:34:14.084202  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 20:34:14.871329  702522 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0108 20:34:14.877510  702522 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0108 20:34:14.885672  702522 command_runner.go:130] > serviceaccount/kindnet created
	I0108 20:34:14.900612  702522 command_runner.go:130] > daemonset.apps/kindnet created
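For reference, the kindnet rollout above is just a kubectl apply of the generated CNI manifest against the embedded kubeconfig. A thin Go sketch of the equivalent call; the binary and file paths are copied from the log and are assumptions anywhere but inside this exact node:

// applycni.go: apply the CNI manifest the way the log above does.
// A sketch; paths are the ones logged for this VM only.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command(
		"sudo", "/var/lib/minikube/binaries/v1.28.4/kubectl",
		"apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml",
	)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
	}
	log.Printf("applied CNI manifest:\n%s", out)
}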
	I0108 20:34:14.906646  702522 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 20:34:14.906800  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:34:14.906880  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28 minikube.k8s.io/name=multinode-933566 minikube.k8s.io/updated_at=2024_01_08T20_34_14_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:34:15.067482  702522 command_runner.go:130] > node/multinode-933566 labeled
	I0108 20:34:15.071296  702522 command_runner.go:130] > -16
	I0108 20:34:15.071339  702522 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0108 20:34:15.071369  702522 ops.go:34] apiserver oom_adj: -16
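The oom_adj probe above shells out to `cat /proc/$(pgrep kube-apiserver)/oom_adj`. The same check as a Go sketch; using pgrep -n to pick the newest match is an assumption (the log's bare pgrep could behave differently if several apiserver processes exist):

// oomadj.go: read the apiserver's OOM adjustment, as ops.go:34 logs above.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the kube-apiserver PID; -n returns only the newest match.
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		log.Fatalf("pgrep failed: %v", err)
	}
	pid := strings.TrimSpace(string(out))
	// oom_adj is the legacy knob; the kernel maps it onto oom_score_adj.
	data, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", pid))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("apiserver oom_adj: %s", data)
}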
	I0108 20:34:15.071446  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:34:15.203089  702522 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:34:15.571573  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:34:15.661727  702522 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:34:16.072256  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:34:16.159327  702522 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:34:16.571924  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:34:16.658585  702522 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:34:17.071578  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:34:17.163501  702522 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:34:17.571986  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:34:17.661823  702522 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:34:18.072383  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:34:18.165286  702522 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:34:18.571896  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:34:18.661314  702522 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:34:19.071819  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:34:19.158719  702522 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:34:19.572062  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:34:19.663880  702522 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:34:20.072288  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:34:20.172601  702522 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:34:20.572136  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:34:20.658761  702522 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:34:21.072096  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:34:21.160345  702522 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:34:21.572544  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:34:21.665291  702522 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:34:22.071846  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:34:22.162147  702522 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:34:22.571594  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:34:22.657313  702522 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:34:23.071541  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:34:23.158017  702522 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:34:23.572214  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:34:23.661598  702522 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:34:24.071952  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:34:24.170525  702522 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:34:24.572412  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:34:24.683677  702522 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:34:25.072384  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:34:25.168505  702522 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:34:25.572286  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:34:25.660824  702522 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:34:26.072398  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:34:26.163907  702522 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:34:26.572132  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:34:26.692275  702522 command_runner.go:130] > NAME      SECRETS   AGE
	I0108 20:34:26.692294  702522 command_runner.go:130] > default   0         0s
	I0108 20:34:26.695911  702522 kubeadm.go:1088] duration metric: took 11.789157668s to wait for elevateKubeSystemPrivileges.
	I0108 20:34:26.695944  702522 kubeadm.go:406] StartCluster complete in 27.206316799s
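The retry loop above (kubectl get sa default every ~500ms until the NotFound errors stop) is how the wait for elevateKubeSystemPrivileges works: the controller-manager has to create the "default" ServiceAccount before anything can run in the default namespace. A client-go sketch of the same wait; the kubeconfig path is the in-VM one from the log and would differ on a host:

// waitsa.go: poll until the "default" ServiceAccount exists, a sketch of
// the loop logged above. Kubeconfig path is an assumption outside the VM.
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Every 500ms, up to 1 minute: NotFound just means "retry".
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := client.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			if err != nil {
				return false, nil // not created yet; keep polling
			}
			return true, nil
		})
	if err != nil {
		log.Fatalf("default service account never appeared: %v", err)
	}
	log.Println("default service account is ready")
}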
	I0108 20:34:26.695965  702522 settings.go:142] acquiring lock: {Name:mk63cb8f057d0d432df7260ff815cc6f0354f468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:34:26.696024  702522 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17907-633350/kubeconfig
	I0108 20:34:26.696670  702522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-633350/kubeconfig: {Name:mk2f931b682c68dbcf44ed887f090aab8cb1a7c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:34:26.697157  702522 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17907-633350/kubeconfig
	I0108 20:34:26.697413  702522 kapi.go:59] client config for multinode-933566: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/client.key", CAFile:"/home/jenkins/minikube-integration/17907-633350/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
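The kapi.go:59 line above shows the client built straight from a rest.Config with client-certificate auth, not via a kubeconfig loader. A minimal sketch along those lines, reusing the host and cert paths from this log (valid only for this profile; rest.TLSClientConfig stands in for the internal sanitized type the log prints):

// restconfig.go: build a clientset from an explicit rest.Config, a sketch
// of the client config logged above. Paths are this profile's files.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.58.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/client.key",
			CAFile:   "/home/jenkins/minikube-integration/17907-633350/.minikube/ca.crt",
		},
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Smoke test: the same coredns deployment the log fetches next.
	dep, err := client.AppsV1().Deployments("kube-system").Get(context.Background(), "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("coredns has %d replicas", *dep.Spec.Replicas)
}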
	I0108 20:34:26.698289  702522 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 20:34:26.698310  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:26.698319  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:26.698332  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:26.698818  702522 config.go:182] Loaded profile config "multinode-933566": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:34:26.698873  702522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 20:34:26.698967  702522 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 20:34:26.699032  702522 addons.go:69] Setting storage-provisioner=true in profile "multinode-933566"
	I0108 20:34:26.699053  702522 addons.go:237] Setting addon storage-provisioner=true in "multinode-933566"
	I0108 20:34:26.699107  702522 host.go:66] Checking if "multinode-933566" exists ...
	I0108 20:34:26.699566  702522 cli_runner.go:164] Run: docker container inspect multinode-933566 --format={{.State.Status}}
	I0108 20:34:26.699756  702522 cert_rotation.go:137] Starting client certificate rotation controller
	I0108 20:34:26.700245  702522 addons.go:69] Setting default-storageclass=true in profile "multinode-933566"
	I0108 20:34:26.700270  702522 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-933566"
	I0108 20:34:26.700543  702522 cli_runner.go:164] Run: docker container inspect multinode-933566 --format={{.State.Status}}
	I0108 20:34:26.723086  702522 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0108 20:34:26.723117  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:26.723140  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:26.723150  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:26.723157  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:26.723163  702522 round_trippers.go:580]     Content-Length: 291
	I0108 20:34:26.723169  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:26 GMT
	I0108 20:34:26.723176  702522 round_trippers.go:580]     Audit-Id: 6c7f3201-0891-45f1-b2bd-dc03997bc9af
	I0108 20:34:26.723187  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:26.723221  702522 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f3819ad3-831c-4511-bd60-afe254a308f4","resourceVersion":"369","creationTimestamp":"2024-01-08T20:34:13Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0108 20:34:26.723831  702522 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f3819ad3-831c-4511-bd60-afe254a308f4","resourceVersion":"369","creationTimestamp":"2024-01-08T20:34:13Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0108 20:34:26.723906  702522 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 20:34:26.723922  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:26.723931  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:26.723938  702522 round_trippers.go:473]     Content-Type: application/json
	I0108 20:34:26.723945  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:26.773475  702522 round_trippers.go:574] Response Status: 200 OK in 49 milliseconds
	I0108 20:34:26.773507  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:26.773522  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:26.773529  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:26.773535  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:26.773546  702522 round_trippers.go:580]     Content-Length: 291
	I0108 20:34:26.773552  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:26 GMT
	I0108 20:34:26.773558  702522 round_trippers.go:580]     Audit-Id: 7e9a3f38-ab05-4460-a59e-b6e27f1ac65f
	I0108 20:34:26.773564  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:26.780369  702522 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f3819ad3-831c-4511-bd60-afe254a308f4","resourceVersion":"390","creationTimestamp":"2024-01-08T20:34:13Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0108 20:34:26.789893  702522 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:34:26.791965  702522 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:34:26.791995  702522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 20:34:26.792080  702522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-933566
	I0108 20:34:26.794894  702522 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17907-633350/kubeconfig
	I0108 20:34:26.795176  702522 kapi.go:59] client config for multinode-933566: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/client.key", CAFile:"/home/jenkins/minikube-integration/17907-633350/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:34:26.795456  702522 addons.go:237] Setting addon default-storageclass=true in "multinode-933566"
	I0108 20:34:26.795492  702522 host.go:66] Checking if "multinode-933566" exists ...
	I0108 20:34:26.795952  702522 cli_runner.go:164] Run: docker container inspect multinode-933566 --format={{.State.Status}}
	I0108 20:34:26.839912  702522 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 20:34:26.839936  702522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 20:34:26.840009  702522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-933566
	I0108 20:34:26.849306  702522 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33479 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/multinode-933566/id_rsa Username:docker}
	I0108 20:34:26.880433  702522 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33479 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/multinode-933566/id_rsa Username:docker}
	I0108 20:34:26.984108  702522 command_runner.go:130] > apiVersion: v1
	I0108 20:34:26.984133  702522 command_runner.go:130] > data:
	I0108 20:34:26.984140  702522 command_runner.go:130] >   Corefile: |
	I0108 20:34:26.984144  702522 command_runner.go:130] >     .:53 {
	I0108 20:34:26.984149  702522 command_runner.go:130] >         errors
	I0108 20:34:26.984155  702522 command_runner.go:130] >         health {
	I0108 20:34:26.984160  702522 command_runner.go:130] >            lameduck 5s
	I0108 20:34:26.984165  702522 command_runner.go:130] >         }
	I0108 20:34:26.984170  702522 command_runner.go:130] >         ready
	I0108 20:34:26.984177  702522 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0108 20:34:26.984182  702522 command_runner.go:130] >            pods insecure
	I0108 20:34:26.984191  702522 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0108 20:34:26.984197  702522 command_runner.go:130] >            ttl 30
	I0108 20:34:26.984201  702522 command_runner.go:130] >         }
	I0108 20:34:26.984207  702522 command_runner.go:130] >         prometheus :9153
	I0108 20:34:26.984212  702522 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0108 20:34:26.984218  702522 command_runner.go:130] >            max_concurrent 1000
	I0108 20:34:26.984222  702522 command_runner.go:130] >         }
	I0108 20:34:26.984228  702522 command_runner.go:130] >         cache 30
	I0108 20:34:26.984232  702522 command_runner.go:130] >         loop
	I0108 20:34:26.984237  702522 command_runner.go:130] >         reload
	I0108 20:34:26.984242  702522 command_runner.go:130] >         loadbalance
	I0108 20:34:26.984246  702522 command_runner.go:130] >     }
	I0108 20:34:26.984251  702522 command_runner.go:130] > kind: ConfigMap
	I0108 20:34:26.984256  702522 command_runner.go:130] > metadata:
	I0108 20:34:26.984263  702522 command_runner.go:130] >   creationTimestamp: "2024-01-08T20:34:13Z"
	I0108 20:34:26.984268  702522 command_runner.go:130] >   name: coredns
	I0108 20:34:26.984273  702522 command_runner.go:130] >   namespace: kube-system
	I0108 20:34:26.984278  702522 command_runner.go:130] >   resourceVersion: "267"
	I0108 20:34:26.984285  702522 command_runner.go:130] >   uid: 5578b3af-bdf5-43be-a627-e1472fd1080c
	I0108 20:34:26.985586  702522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
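The sed pipeline above splices a hosts block (192.168.58.1 host.minikube.internal, then fallthrough) into the Corefile just ahead of the forward plugin and replaces the ConfigMap in one shot. A hedged client-go equivalent; the eight-space plugin indentation is assumed to match the Corefile dumped above, and the second sed edit (inserting log before errors) is omitted for brevity:

// corednshosts.go: inject a host record into the coredns Corefile via the
// API, a sketch mirroring the sed pipeline above, not minikube's code.
package main

import (
	"context"
	"log"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	hosts := "        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }\n"
	// Insert the hosts block immediately before the forward plugin.
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
		"        forward . /etc/resolv.conf", hosts+"        forward . /etc/resolv.conf", 1)
	if _, err := client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("configmap/coredns replaced with host.minikube.internal record")
}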
	I0108 20:34:27.032831  702522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:34:27.112194  702522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 20:34:27.198681  702522 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 20:34:27.198760  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:27.198784  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:27.198808  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:27.203648  702522 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 20:34:27.203722  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:27.203744  702522 round_trippers.go:580]     Audit-Id: 2a93d91c-86e8-469a-b192-6e8c74af5835
	I0108 20:34:27.203778  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:27.203813  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:27.203832  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:27.203878  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:27.203909  702522 round_trippers.go:580]     Content-Length: 291
	I0108 20:34:27.203958  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:27 GMT
	I0108 20:34:27.204000  702522 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f3819ad3-831c-4511-bd60-afe254a308f4","resourceVersion":"403","creationTimestamp":"2024-01-08T20:34:13Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 20:34:27.204142  702522 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-933566" context rescaled to 1 replicas
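The GET/PUT pair against .../deployments/coredns/scale above is the scale subresource in action: fetch an autoscaling/v1 Scale object, set spec.replicas to 1, and PUT it back, with the carried resourceVersion guarding against concurrent writers. A sketch using client-go's GetScale/UpdateScale helpers (kubeconfig path assumed, as before):

// rescale.go: shrink coredns to one replica via the scale subresource,
// the same requests logged above. A sketch, not minikube's kapi.go.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	deployments := client.AppsV1().Deployments("kube-system")
	// GET .../deployments/coredns/scale (an autoscaling/v1 Scale).
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if scale.Spec.Replicas == 1 {
		return // already rescaled; nothing to do
	}
	scale.Spec.Replicas = 1
	// PUT it back; the resourceVersion in Scale makes this a guarded update.
	if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("coredns rescaled to 1 replica")
}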
	I0108 20:34:27.204213  702522 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 20:34:27.206564  702522 out.go:177] * Verifying Kubernetes components...
	I0108 20:34:27.208892  702522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:34:27.655399  702522 command_runner.go:130] > configmap/coredns replaced
	I0108 20:34:27.661252  702522 start.go:929] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0108 20:34:27.730937  702522 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0108 20:34:27.737263  702522 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0108 20:34:27.747314  702522 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0108 20:34:27.756784  702522 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0108 20:34:27.768156  702522 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0108 20:34:27.800398  702522 command_runner.go:130] > pod/storage-provisioner created
	I0108 20:34:27.806321  702522 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0108 20:34:27.806470  702522 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0108 20:34:27.806484  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:27.806494  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:27.806501  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:27.806958  702522 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17907-633350/kubeconfig
	I0108 20:34:27.807225  702522 kapi.go:59] client config for multinode-933566: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/client.key", CAFile:"/home/jenkins/minikube-integration/17907-633350/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:34:27.807499  702522 node_ready.go:35] waiting up to 6m0s for node "multinode-933566" to be "Ready" ...
	I0108 20:34:27.807583  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:27.807595  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:27.807603  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:27.807610  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:27.813330  702522 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0108 20:34:27.813351  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:27.813370  702522 round_trippers.go:580]     Audit-Id: c25f018d-446a-4a6a-bb3a-8246c603f902
	I0108 20:34:27.813379  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:27.813388  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:27.813397  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:27.813404  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:27.813415  702522 round_trippers.go:580]     Content-Length: 1273
	I0108 20:34:27.813421  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:27 GMT
	I0108 20:34:27.814570  702522 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0108 20:34:27.814590  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:27.814598  702522 round_trippers.go:580]     Audit-Id: 359192c6-56f6-476f-9a7f-f1066cee0cec
	I0108 20:34:27.814605  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:27.814611  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:27.814617  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:27.814626  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:27.814632  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:27 GMT
	I0108 20:34:27.814765  702522 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"412"},"items":[{"metadata":{"name":"standard","uid":"c744d6f0-42d4-462b-a6c7-a1ec6901e8e0","resourceVersion":"404","creationTimestamp":"2024-01-08T20:34:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T20:34:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0108 20:34:27.815291  702522 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c744d6f0-42d4-462b-a6c7-a1ec6901e8e0","resourceVersion":"404","creationTimestamp":"2024-01-08T20:34:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T20:34:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0108 20:34:27.815350  702522 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0108 20:34:27.815362  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:27.815371  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:27.815383  702522 round_trippers.go:473]     Content-Type: application/json
	I0108 20:34:27.815390  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:27.816508  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:27.823682  702522 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0108 20:34:27.823706  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:27.823715  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:27.823721  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:27.823728  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:27.823735  702522 round_trippers.go:580]     Content-Length: 1220
	I0108 20:34:27.823747  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:27 GMT
	I0108 20:34:27.823757  702522 round_trippers.go:580]     Audit-Id: 82b43635-baaf-40bc-bce5-7efd40c9564f
	I0108 20:34:27.823763  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:27.827128  702522 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c744d6f0-42d4-462b-a6c7-a1ec6901e8e0","resourceVersion":"404","creationTimestamp":"2024-01-08T20:34:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T20:34:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0108 20:34:27.831114  702522 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 20:34:27.833108  702522 addons.go:508] enable addons completed in 1.134133711s: enabled=[storage-provisioner default-storageclass]
	I0108 20:34:28.308146  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:28.308168  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:28.308178  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:28.308186  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:28.310607  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:28.310632  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:28.310641  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:28.310648  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:28 GMT
	I0108 20:34:28.310664  702522 round_trippers.go:580]     Audit-Id: b7198462-c463-4d46-abf2-896a631d3b69
	I0108 20:34:28.310676  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:28.310683  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:28.310715  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:28.310929  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:28.808587  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:28.808614  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:28.808624  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:28.808631  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:28.811067  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:28.811091  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:28.811100  702522 round_trippers.go:580]     Audit-Id: 9c8959d6-ff19-4cb2-9ec8-37bebbcdaa9c
	I0108 20:34:28.811109  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:28.811115  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:28.811122  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:28.811128  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:28.811138  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:28 GMT
	I0108 20:34:28.811391  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:29.308504  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:29.308529  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:29.308539  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:29.308546  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:29.311026  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:29.311057  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:29.311067  702522 round_trippers.go:580]     Audit-Id: 7301d7c3-5bf9-4d23-adcf-8863bd67d9b6
	I0108 20:34:29.311084  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:29.311095  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:29.311103  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:29.311111  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:29.311122  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:29 GMT
	I0108 20:34:29.311501  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:29.808231  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:29.808259  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:29.808269  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:29.808276  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:29.810758  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:29.810780  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:29.810793  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:29.810799  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:29.810805  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:29 GMT
	I0108 20:34:29.810812  702522 round_trippers.go:580]     Audit-Id: bff0bd73-7024-4012-ae49-a96f36645351
	I0108 20:34:29.810821  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:29.810833  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:29.811046  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:29.811453  702522 node_ready.go:58] node "multinode-933566" has status "Ready":"False"
	I0108 20:34:30.308096  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:30.308123  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:30.308133  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:30.308142  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:30.310608  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:30.310674  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:30.310688  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:30.310695  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:30 GMT
	I0108 20:34:30.310716  702522 round_trippers.go:580]     Audit-Id: e70dce05-1c7a-4460-baf4-bbf28a32176a
	I0108 20:34:30.310734  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:30.310741  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:30.310748  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:30.310903  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:30.807905  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:30.807924  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:30.807934  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:30.807942  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:30.810360  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:30.810380  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:30.810387  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:30.810394  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:30.810400  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:30.810406  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:30.810412  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:30 GMT
	I0108 20:34:30.810419  702522 round_trippers.go:580]     Audit-Id: df484a1f-0625-4b48-acb3-5ad320c6c920
	I0108 20:34:30.810546  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:31.308222  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:31.308248  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:31.308258  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:31.308265  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:31.310593  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:31.310618  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:31.310626  702522 round_trippers.go:580]     Audit-Id: 3664d505-db97-4b53-a892-4c282c75965f
	I0108 20:34:31.310633  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:31.310640  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:31.310646  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:31.310654  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:31.310664  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:31 GMT
	I0108 20:34:31.311023  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:31.808502  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:31.808526  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:31.808535  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:31.808542  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:31.811102  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:31.811132  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:31.811141  702522 round_trippers.go:580]     Audit-Id: 3d306502-73e3-41bf-b97c-b0e3b1c31e0e
	I0108 20:34:31.811147  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:31.811154  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:31.811160  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:31.811166  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:31.811175  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:31 GMT
	I0108 20:34:31.811301  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:31.811698  702522 node_ready.go:58] node "multinode-933566" has status "Ready":"False"
	I0108 20:34:32.308243  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:32.308269  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:32.308317  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:32.308328  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:32.310859  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:32.310884  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:32.310893  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:32.310900  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:32 GMT
	I0108 20:34:32.310908  702522 round_trippers.go:580]     Audit-Id: f39b6e58-071d-421f-81ab-89b07a05e671
	I0108 20:34:32.310914  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:32.310921  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:32.310927  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:32.311049  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:32.808146  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:32.808170  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:32.808183  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:32.808190  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:32.810686  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:32.810712  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:32.810720  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:32.810727  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:32.810733  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:32.810784  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:32.810800  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:32 GMT
	I0108 20:34:32.810808  702522 round_trippers.go:580]     Audit-Id: 7c7dab10-2d9d-410e-9c67-313bcd7d2b70
	I0108 20:34:32.810929  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:33.308506  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:33.308530  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:33.308540  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:33.308547  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:33.311439  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:33.311462  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:33.311470  702522 round_trippers.go:580]     Audit-Id: 34760712-7444-4721-a187-b859e6fbe5b4
	I0108 20:34:33.311477  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:33.311483  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:33.311490  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:33.311500  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:33.311507  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:33 GMT
	I0108 20:34:33.311623  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:33.808605  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:33.808628  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:33.808651  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:33.808659  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:33.811126  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:33.811152  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:33.811161  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:33.811167  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:33 GMT
	I0108 20:34:33.811174  702522 round_trippers.go:580]     Audit-Id: d99a4844-ba25-422c-9846-0e140b8fab01
	I0108 20:34:33.811180  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:33.811187  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:33.811193  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:33.811310  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:33.811736  702522 node_ready.go:58] node "multinode-933566" has status "Ready":"False"
	I0108 20:34:34.308223  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:34.308248  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:34.308259  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:34.308267  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:34.310575  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:34.310597  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:34.310606  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:34 GMT
	I0108 20:34:34.310613  702522 round_trippers.go:580]     Audit-Id: 689e7b34-6f9a-4482-936b-10ff3d1a67da
	I0108 20:34:34.310619  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:34.310630  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:34.310641  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:34.310653  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:34.311112  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:34.808406  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:34.808431  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:34.808441  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:34.808449  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:34.810785  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:34.810809  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:34.810818  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:34.810825  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:34.810831  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:34 GMT
	I0108 20:34:34.810838  702522 round_trippers.go:580]     Audit-Id: d8f51bfd-cc0a-4b45-ba26-bc75cb5d9651
	I0108 20:34:34.810844  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:34.810851  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:34.811116  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:35.308644  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:35.308669  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:35.308679  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:35.308686  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:35.311033  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:35.311055  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:35.311064  702522 round_trippers.go:580]     Audit-Id: d687048f-07ab-4205-a0e6-bedd7ee5e7cc
	I0108 20:34:35.311070  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:35.311077  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:35.311083  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:35.311093  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:35.311103  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:35 GMT
	I0108 20:34:35.311538  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:35.808620  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:35.808644  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:35.808654  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:35.808661  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:35.811046  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:35.811071  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:35.811080  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:35.811087  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:35 GMT
	I0108 20:34:35.811093  702522 round_trippers.go:580]     Audit-Id: 86c9bc64-8203-422b-91d5-40d91ed05c88
	I0108 20:34:35.811100  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:35.811109  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:35.811125  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:35.811396  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:35.811805  702522 node_ready.go:58] node "multinode-933566" has status "Ready":"False"
	I0108 20:34:36.308321  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:36.308346  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:36.308356  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:36.308364  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:36.310797  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:36.310820  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:36.310828  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:36.310835  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:36.310841  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:36.310848  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:36 GMT
	I0108 20:34:36.310855  702522 round_trippers.go:580]     Audit-Id: 66a7369b-1ddb-422c-a475-28b341de7224
	I0108 20:34:36.310861  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:36.310989  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:36.807902  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:36.807928  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:36.807938  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:36.807946  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:36.810241  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:36.810259  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:36.810267  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:36.810273  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:36.810279  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:36 GMT
	I0108 20:34:36.810285  702522 round_trippers.go:580]     Audit-Id: 49a9cd75-b817-48e8-8db2-86c1f876615e
	I0108 20:34:36.810291  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:36.810298  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:36.810427  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:37.308436  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:37.308457  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:37.308467  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:37.308474  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:37.310972  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:37.311000  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:37.311009  702522 round_trippers.go:580]     Audit-Id: c92bfa73-3f18-4b63-aef7-1b3a1d80aa10
	I0108 20:34:37.311016  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:37.311027  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:37.311033  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:37.311039  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:37.311050  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:37 GMT
	I0108 20:34:37.311444  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:37.807737  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:37.807767  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:37.807778  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:37.807785  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:37.810284  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:37.810335  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:37.810366  702522 round_trippers.go:580]     Audit-Id: 865bc627-b464-45e6-925c-f28722cf3b2c
	I0108 20:34:37.810381  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:37.810388  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:37.810395  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:37.810401  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:37.810408  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:37 GMT
	I0108 20:34:37.810524  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:38.307845  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:38.307869  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:38.307879  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:38.307886  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:38.310401  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:38.310425  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:38.310460  702522 round_trippers.go:580]     Audit-Id: e01438ed-67e8-43f7-a0b7-96359c0f1c75
	I0108 20:34:38.310469  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:38.310475  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:38.310482  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:38.310488  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:38.310494  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:38 GMT
	I0108 20:34:38.310893  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:38.311323  702522 node_ready.go:58] node "multinode-933566" has status "Ready":"False"
	I0108 20:34:38.808180  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:38.808204  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:38.808213  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:38.808220  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:38.810566  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:38.810608  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:38.810617  702522 round_trippers.go:580]     Audit-Id: 1eb9e916-ac42-4d75-9807-594411eddea3
	I0108 20:34:38.810623  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:38.810632  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:38.810639  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:38.810648  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:38.810655  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:38 GMT
	I0108 20:34:38.810841  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:39.307900  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:39.307925  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:39.307935  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:39.307942  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:39.310517  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:39.310542  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:39.310551  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:39.310558  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:39.310564  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:39 GMT
	I0108 20:34:39.310570  702522 round_trippers.go:580]     Audit-Id: 706eccdf-38ee-44d5-b205-936194a6f717
	I0108 20:34:39.310577  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:39.310583  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:39.310719  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:39.808654  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:39.808681  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:39.808692  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:39.808699  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:39.811092  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:39.811116  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:39.811124  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:39.811131  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:39.811137  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:39.811144  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:39 GMT
	I0108 20:34:39.811150  702522 round_trippers.go:580]     Audit-Id: ff95dd90-36e3-45dd-94a8-75277d273e0c
	I0108 20:34:39.811161  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:39.811510  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:40.308616  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:40.308649  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:40.308660  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:40.308667  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:40.311159  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:40.311180  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:40.311189  702522 round_trippers.go:580]     Audit-Id: 6bb3d84a-5037-44d9-ab9c-09e3bf02bd7e
	I0108 20:34:40.311195  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:40.311202  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:40.311209  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:40.311216  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:40.311223  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:40 GMT
	I0108 20:34:40.311371  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:40.311824  702522 node_ready.go:58] node "multinode-933566" has status "Ready":"False"
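
The block above is one iteration of a readiness poll: roughly every 500ms the client GETs /api/v1/nodes/multinode-933566 and checks the node's Ready condition, logging node "multinode-933566" has status "Ready":"False" until it flips to True. A minimal sketch of that pattern using client-go follows; the node name and ~500ms cadence are taken from the log, while the kubeconfig handling and loop structure are illustrative assumptions, not minikube's actual node_ready.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load credentials from the default kubeconfig (an assumption; minikube
	// builds its client from the profile's generated kubeconfig).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	nodeName := "multinode-933566"                   // node polled in the log above
	ticker := time.NewTicker(500 * time.Millisecond) // matches the ~500ms spacing of the log timestamps
	defer ticker.Stop()

	for range ticker.C {
		// Each tick corresponds to one GET /api/v1/nodes/<name> entry in the log.
		node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			fmt.Println("get node:", err)
			continue
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				fmt.Printf("node %q has status \"Ready\":%q\n", nodeName, cond.Status)
				if cond.Status == corev1.ConditionTrue {
					return // node is Ready; stop polling
				}
			}
		}
	}
}
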
	I0108 20:34:40.808263  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:40.808285  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:40.808295  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:40.808302  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:40.810768  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:40.810790  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:40.810799  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:40.810806  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:40.810812  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:40.810818  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:40.810825  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:40 GMT
	I0108 20:34:40.810836  702522 round_trippers.go:580]     Audit-Id: 71f84e10-f27d-4167-97bc-289c0cb8bdbb
	I0108 20:34:40.811168  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:41.308495  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:41.308521  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:41.308531  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:41.308538  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:41.311007  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:41.311029  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:41.311037  702522 round_trippers.go:580]     Audit-Id: ce65933c-6b88-4fce-ac7f-5d55ed251f20
	I0108 20:34:41.311044  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:41.311050  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:41.311056  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:41.311062  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:41.311071  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:41 GMT
	I0108 20:34:41.311273  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:41.808244  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:41.808267  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:41.808277  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:41.808284  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:41.810663  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:41.810686  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:41.810694  702522 round_trippers.go:580]     Audit-Id: 4b95641f-c5e9-4be9-89bb-d3b8c7317507
	I0108 20:34:41.810702  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:41.810708  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:41.810714  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:41.810724  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:41.810731  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:41 GMT
	I0108 20:34:41.810894  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:42.307881  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:42.307907  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:42.307917  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:42.307925  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:42.310534  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:42.310559  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:42.310568  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:42.310575  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:42.310581  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:42.310588  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:42.310594  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:42 GMT
	I0108 20:34:42.310601  702522 round_trippers.go:580]     Audit-Id: 93eef8a7-7b59-4001-9bb6-fb7384fe23ab
	I0108 20:34:42.310938  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:42.808601  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:42.808626  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:42.808636  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:42.808644  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:42.811104  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:42.811123  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:42.811132  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:42 GMT
	I0108 20:34:42.811138  702522 round_trippers.go:580]     Audit-Id: ce51cba4-38b4-451d-a4b0-c399eac7b070
	I0108 20:34:42.811144  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:42.811150  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:42.811156  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:42.811162  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:42.811256  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:42.811644  702522 node_ready.go:58] node "multinode-933566" has status "Ready":"False"
	I0108 20:34:43.308037  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:43.308066  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:43.308076  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:43.308084  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:43.310687  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:43.310711  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:43.310721  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:43.310728  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:43 GMT
	I0108 20:34:43.310734  702522 round_trippers.go:580]     Audit-Id: 52a200b9-d482-4842-940b-b0fcee632b61
	I0108 20:34:43.310740  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:43.310747  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:43.310756  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:43.310887  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:43.807740  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:43.807762  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:43.807771  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:43.807779  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:43.810282  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:43.810309  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:43.810318  702522 round_trippers.go:580]     Audit-Id: e9161f02-9d87-413c-ad41-bdf22b8d79f7
	I0108 20:34:43.810325  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:43.810331  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:43.810337  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:43.810344  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:43.810354  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:43 GMT
	I0108 20:34:43.810478  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:44.308595  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:44.308623  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:44.308633  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:44.308641  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:44.311393  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:44.311418  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:44.311426  702522 round_trippers.go:580]     Audit-Id: a723e202-d0e1-45b0-a182-1b7a0b72db47
	I0108 20:34:44.311433  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:44.311440  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:44.311447  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:44.311454  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:44.311461  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:44 GMT
	I0108 20:34:44.311629  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:44.807923  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:44.807944  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:44.807956  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:44.807963  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:44.810967  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:44.810987  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:44.810996  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:44 GMT
	I0108 20:34:44.811002  702522 round_trippers.go:580]     Audit-Id: d06376ea-84f2-4af5-9793-49a6c2bb9366
	I0108 20:34:44.811009  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:44.811015  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:44.811021  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:44.811028  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:44.811467  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:44.811878  702522 node_ready.go:58] node "multinode-933566" has status "Ready":"False"
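
Every response above carries an Audit-Id header (a per-request identifier the apiserver generates for audit-log correlation) plus X-Kubernetes-Pf-Flowschema-Uid and X-Kubernetes-Pf-Prioritylevel-Uid headers, which name the API Priority and Fairness FlowSchema and PriorityLevel that classified the request; note they stay identical across all of these polls. The GET / Request Headers / Response Headers lines themselves come from client-go's debugging round tripper (transport/round_trippers.go), enabled at high log verbosity. A rough, hypothetical sketch of that wrapper shape, not the real implementation:

package main

import (
	"fmt"
	"net/http"
)

// loggingRoundTripper mimics the shape of client-go's debugging round
// tripper, which emits the request/response header lines seen in this log.
type loggingRoundTripper struct{ next http.RoundTripper }

func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	fmt.Printf("%s %s\n", req.Method, req.URL)
	fmt.Println("Request Headers:")
	for k, v := range req.Header {
		fmt.Printf("    %s: %v\n", k, v)
	}
	resp, err := l.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	fmt.Println("Response Status:", resp.Status)
	fmt.Println("Response Headers:")
	for k, v := range resp.Header {
		fmt.Printf("    %s: %v\n", k, v)
	}
	return resp, nil
}

func main() {
	// Wrap the default transport; client-go installs its equivalent around the
	// authenticated API-server transport, which is how each poll above gets logged.
	// The URL here is a placeholder: reaching the log's 192.168.58.2:8443 endpoint
	// would also require the cluster's TLS credentials.
	client := &http.Client{Transport: loggingRoundTripper{next: http.DefaultTransport}}
	resp, err := client.Get("https://example.com/")
	if err != nil {
		fmt.Println(err)
		return
	}
	resp.Body.Close()
}
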
	I0108 20:34:45.308858  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:45.308884  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:45.308894  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:45.308901  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:45.312169  702522 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:34:45.312251  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:45.312305  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:45.312315  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:45 GMT
	I0108 20:34:45.312321  702522 round_trippers.go:580]     Audit-Id: c9be4adf-adba-459a-828e-8136ca067d51
	I0108 20:34:45.312327  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:45.312333  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:45.312340  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:45.312529  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:45.808313  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:45.808336  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:45.808346  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:45.808353  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:45.810817  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:45.810842  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:45.810851  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:45.810857  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:45.810864  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:45.810871  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:45 GMT
	I0108 20:34:45.810880  702522 round_trippers.go:580]     Audit-Id: b9b35294-ee24-46ad-9203-4476f140473e
	I0108 20:34:45.810887  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:45.811145  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:46.308603  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:46.308627  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:46.308637  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:46.308644  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:46.311040  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:46.311064  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:46.311073  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:46.311080  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:46.311086  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:46 GMT
	I0108 20:34:46.311092  702522 round_trippers.go:580]     Audit-Id: de018bb0-5396-4ead-9102-670ba4a67ee2
	I0108 20:34:46.311098  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:46.311105  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:46.311219  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:46.808357  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:46.808381  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:46.808392  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:46.808400  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:46.810989  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:46.811015  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:46.811027  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:46.811034  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:46.811041  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:46 GMT
	I0108 20:34:46.811047  702522 round_trippers.go:580]     Audit-Id: e71f7653-4216-4ebf-bd61-e9a06c726b67
	I0108 20:34:46.811054  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:46.811060  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:46.811153  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:47.308222  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:47.308246  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:47.308256  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:47.308263  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:47.310700  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:47.310722  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:47.310730  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:47 GMT
	I0108 20:34:47.310737  702522 round_trippers.go:580]     Audit-Id: 53f27f06-564d-48fc-8b69-d17c07683ce3
	I0108 20:34:47.310743  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:47.310749  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:47.310755  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:47.310762  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:47.310889  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:47.311283  702522 node_ready.go:58] node "multinode-933566" has status "Ready":"False"
	I0108 20:34:47.807743  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:47.807766  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:47.807776  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:47.807783  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:47.810242  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:47.810261  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:47.810270  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:47 GMT
	I0108 20:34:47.810277  702522 round_trippers.go:580]     Audit-Id: 913ff354-5200-4f27-9f2e-1b5a235acea9
	I0108 20:34:47.810283  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:47.810290  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:47.810296  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:47.810302  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:47.810411  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:48.308415  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:48.308437  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:48.308448  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:48.308455  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:48.310798  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:48.310824  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:48.310833  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:48.310839  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:48.310845  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:48.310852  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:48.310864  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:48 GMT
	I0108 20:34:48.310870  702522 round_trippers.go:580]     Audit-Id: 3d1e4900-f7d5-47f1-a40c-9976947199e0
	I0108 20:34:48.311110  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:48.807844  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:48.807871  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:48.807886  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:48.807894  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:48.810323  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:48.810342  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:48.810351  702522 round_trippers.go:580]     Audit-Id: abc3e957-4003-4fe7-82ae-9f89aa6bcdc6
	I0108 20:34:48.810357  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:48.810363  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:48.810370  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:48.810376  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:48.810383  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:48 GMT
	I0108 20:34:48.810516  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:49.308675  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:49.308698  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:49.308708  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:49.308716  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:49.311415  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:49.311437  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:49.311445  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:49.311452  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:49.311459  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:49.311465  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:49 GMT
	I0108 20:34:49.311471  702522 round_trippers.go:580]     Audit-Id: e017208e-4179-4478-8ea6-857e22035d25
	I0108 20:34:49.311478  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:49.311595  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:49.311999  702522 node_ready.go:58] node "multinode-933566" has status "Ready":"False"
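
In real Kubernetes tooling a poll-until-true loop like this one is usually expressed with the apimachinery wait helpers rather than a hand-rolled ticker. A hedged sketch under that assumption follows; this is not minikube's code, the 500ms/5m values are illustrative, and wait.PollImmediate is the classic form (since deprecated in favor of context-based variants such as PollUntilContextTimeout).

package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForNodeReady polls the named node until its Ready condition is True,
// mirroring the loop whose iterations fill this log.
func WaitForNodeReady(client kubernetes.Interface, nodeName string) error {
	return wait.PollImmediate(500*time.Millisecond, 5*time.Minute, func() (bool, error) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient API errors as "not ready yet" and keep polling
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
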
	I0108 20:34:49.808549  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:49.808572  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:49.808582  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:49.808590  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:49.811053  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:49.811078  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:49.811086  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:49 GMT
	I0108 20:34:49.811093  702522 round_trippers.go:580]     Audit-Id: f6df8a2f-f98c-4914-8023-9ec661c58ad5
	I0108 20:34:49.811099  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:49.811106  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:49.811112  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:49.811120  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:49.811359  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:50.308488  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:50.308513  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:50.308523  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:50.308531  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:50.311009  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:50.311029  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:50.311037  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:50.311044  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:50.311050  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:50.311056  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:50.311063  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:50 GMT
	I0108 20:34:50.311071  702522 round_trippers.go:580]     Audit-Id: c9941331-dc73-4928-8336-6824212a4610
	I0108 20:34:50.311235  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:50.808645  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:50.808668  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:50.808679  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:50.808686  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:50.811217  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:50.811237  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:50.811246  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:50 GMT
	I0108 20:34:50.811252  702522 round_trippers.go:580]     Audit-Id: ae5eafc0-b559-4f43-9835-f64d660e4998
	I0108 20:34:50.811259  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:50.811265  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:50.811271  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:50.811278  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:50.811369  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:51.307765  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:51.307788  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:51.307797  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:51.307804  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:51.310511  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:51.310535  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:51.310544  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:51.310551  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:51.310557  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:51.310563  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:51.310569  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:51 GMT
	I0108 20:34:51.310583  702522 round_trippers.go:580]     Audit-Id: b23cadeb-4527-41bd-ac8e-e967be8e69eb
	I0108 20:34:51.310707  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:51.808443  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:51.808468  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:51.808478  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:51.808486  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:51.811083  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:51.811106  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:51.811114  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:51 GMT
	I0108 20:34:51.811121  702522 round_trippers.go:580]     Audit-Id: 2a499d18-82c2-42ec-ad98-c4f95e4f3de5
	I0108 20:34:51.811127  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:51.811133  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:51.811140  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:51.811146  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:51.811251  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:51.811664  702522 node_ready.go:58] node "multinode-933566" has status "Ready":"False"
	I0108 20:34:52.308238  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:52.308265  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:52.308276  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:52.308286  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:52.310756  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:52.310780  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:52.310788  702522 round_trippers.go:580]     Audit-Id: cd2462a8-88c3-4ede-b1fc-a0f4d9fd19e8
	I0108 20:34:52.310795  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:52.310801  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:52.310808  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:52.310815  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:52.310821  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:52 GMT
	I0108 20:34:52.310996  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:52.807796  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:52.807821  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:52.807882  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:52.807895  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:52.810356  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:52.810375  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:52.810383  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:52.810389  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:52.810395  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:52.810402  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:52.810408  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:52 GMT
	I0108 20:34:52.810414  702522 round_trippers.go:580]     Audit-Id: be2807d5-9f9c-44b4-874c-27f4bc658874
	I0108 20:34:52.810655  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:53.308081  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:53.308105  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:53.308115  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:53.308122  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:53.310538  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:53.310557  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:53.310565  702522 round_trippers.go:580]     Audit-Id: 68a47cdc-faa9-4e9f-8970-e7fd23326e80
	I0108 20:34:53.310571  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:53.310577  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:53.310584  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:53.310596  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:53.310603  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:53 GMT
	I0108 20:34:53.310948  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:53.808592  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:53.808617  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:53.808628  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:53.808636  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:53.811279  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:53.811299  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:53.811307  702522 round_trippers.go:580]     Audit-Id: 54903adc-e63d-41e9-8ddb-e6610dd3093f
	I0108 20:34:53.811314  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:53.811319  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:53.811326  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:53.811333  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:53.811339  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:53 GMT
	I0108 20:34:53.811502  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:53.811905  702522 node_ready.go:58] node "multinode-933566" has status "Ready":"False"
	I0108 20:34:54.308613  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:54.308636  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:54.308646  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:54.308653  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:54.311055  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:54.311074  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:54.311083  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:54.311089  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:54.311095  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:54.311102  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:54.311108  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:54 GMT
	I0108 20:34:54.311115  702522 round_trippers.go:580]     Audit-Id: b94ece6a-874a-401b-95c8-6c260dcd4b1c
	I0108 20:34:54.311243  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:54.808585  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:54.808613  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:54.808623  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:54.808630  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:54.811012  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:54.811037  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:54.811046  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:54.811054  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:54 GMT
	I0108 20:34:54.811060  702522 round_trippers.go:580]     Audit-Id: 56937849-c741-4bdd-9163-c3386fd03e66
	I0108 20:34:54.811067  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:54.811073  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:54.811081  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:54.811181  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:55.308193  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:55.308218  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:55.308227  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:55.308235  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:55.310719  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:55.310747  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:55.310756  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:55.310766  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:55 GMT
	I0108 20:34:55.310772  702522 round_trippers.go:580]     Audit-Id: e8cea6e1-ec62-4381-a845-8c6e517af29e
	I0108 20:34:55.310778  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:55.310785  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:55.310798  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:55.310928  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:55.807712  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:55.807744  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:55.807755  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:55.807762  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:55.810181  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:55.810208  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:55.810217  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:55.810224  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:55.810230  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:55.810237  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:55 GMT
	I0108 20:34:55.810247  702522 round_trippers.go:580]     Audit-Id: ea4da8fe-3f55-478d-aeb1-b5e9a74beb8c
	I0108 20:34:55.810254  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:55.810591  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:56.308234  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:56.308257  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:56.308266  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:56.308275  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:56.310785  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:56.310805  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:56.310813  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:56.310820  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:56.310826  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:56 GMT
	I0108 20:34:56.310833  702522 round_trippers.go:580]     Audit-Id: ae70dfd3-9323-482b-9cb7-c049892cfb2d
	I0108 20:34:56.310841  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:56.310848  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:56.311007  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:56.311402  702522 node_ready.go:58] node "multinode-933566" has status "Ready":"False"
	I0108 20:34:56.807937  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:56.807958  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:56.807967  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:56.807975  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:56.810454  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:56.810474  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:56.810483  702522 round_trippers.go:580]     Audit-Id: 17d80878-c9b0-4a11-a27d-917b1774d950
	I0108 20:34:56.810489  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:56.810495  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:56.810501  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:56.810508  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:56.810514  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:56 GMT
	I0108 20:34:56.810619  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:57.307767  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:57.307788  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:57.307798  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:57.307806  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:57.310159  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:57.310182  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:57.310191  702522 round_trippers.go:580]     Audit-Id: 2eb07bed-876a-4fbb-9ab9-c234e3242cf4
	I0108 20:34:57.310198  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:57.310204  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:57.310210  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:57.310217  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:57.310224  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:57 GMT
	I0108 20:34:57.310343  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:57.808557  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:57.808584  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:57.808594  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:57.808601  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:57.810950  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:57.810975  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:57.810983  702522 round_trippers.go:580]     Audit-Id: b6fba85f-27ea-4433-b8a8-0a5d8dcf047b
	I0108 20:34:57.810990  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:57.810996  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:57.811002  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:57.811008  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:57.811015  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:57 GMT
	I0108 20:34:57.811335  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"345","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 20:34:58.308439  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:58.308466  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:58.308476  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:58.308483  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:58.326541  702522 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0108 20:34:58.326570  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:58.326580  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:58.326587  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:58.326634  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:58.326647  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:58.326654  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:58 GMT
	I0108 20:34:58.326661  702522 round_trippers.go:580]     Audit-Id: ee42f779-96f4-43d1-84bb-484981c6c6e3
	I0108 20:34:58.327588  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"437","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 20:34:58.327988  702522 node_ready.go:49] node "multinode-933566" has status "Ready":"True"
	I0108 20:34:58.328036  702522 node_ready.go:38] duration metric: took 30.520518383s waiting for node "multinode-933566" to be "Ready" ...
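
[editor's note] The ~500ms cadence of the GETs above is the node-readiness poll: fetch the Node object, check its Ready condition, sleep, repeat, until the status flips to True (here after 30.52s). A minimal client-go sketch of that loop, assuming a standard kubeconfig and a 500ms interval inferred from the log timestamps (this is not minikube's actual node_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the named node until its NodeReady condition is True,
// matching the repeated GET /api/v1/nodes/<name> requests in the log above.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return nil // node now reports Ready:"True"
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // assumed interval, based on log timing
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "multinode-933566"); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
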
	I0108 20:34:58.328055  702522 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:34:58.328132  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 20:34:58.328144  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:58.328153  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:58.328164  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:58.333410  702522 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 20:34:58.333440  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:58.333450  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:58.333457  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:58 GMT
	I0108 20:34:58.333463  702522 round_trippers.go:580]     Audit-Id: d0488212-207a-421c-99d7-68e0ed4c93d6
	I0108 20:34:58.333469  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:58.333481  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:58.333488  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:58.335207  702522 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2945x","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"dfb7da4b-0626-4cd3-accf-49736fec486b","resourceVersion":"442","creationTimestamp":"2024-01-08T20:34:26Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"72dec1d4-3d8b-4eb1-86df-aa1268e266be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72dec1d4-3d8b-4eb1-86df-aa1268e266be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55534 chars]
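
[editor's note] Note that the pod list above is fetched with a single unfiltered GET on /api/v1/namespaces/kube-system/pods, and the six label selectors named in the "extra waiting" line are applied client-side. A sketch of that pattern, with assumed package and function names:

package syspods

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// criticalLabels mirrors the selector list quoted in the log line above.
var criticalLabels = []struct{ key, value string }{
	{"k8s-app", "kube-dns"},
	{"component", "etcd"},
	{"component", "kube-apiserver"},
	{"component", "kube-controller-manager"},
	{"k8s-app", "kube-proxy"},
	{"component", "kube-scheduler"},
}

// ListCritical fetches all kube-system pods with one API call and filters
// locally, matching the single unfiltered pod list seen in the log.
func ListCritical(ctx context.Context, cs kubernetes.Interface) ([]corev1.Pod, error) {
	all, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var out []corev1.Pod
	for _, p := range all.Items {
		for _, l := range criticalLabels {
			if p.Labels[l.key] == l.value {
				out = append(out, p)
				break
			}
		}
	}
	return out, nil
}
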
	I0108 20:34:58.339427  702522 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2945x" in "kube-system" namespace to be "Ready" ...
	I0108 20:34:58.339563  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2945x
	I0108 20:34:58.339575  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:58.339585  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:58.339592  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:58.346916  702522 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0108 20:34:58.346952  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:58.346961  702522 round_trippers.go:580]     Audit-Id: e8fa50ee-1c28-48fe-aeb5-7b49214fd418
	I0108 20:34:58.346968  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:58.346975  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:58.346982  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:58.346988  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:58.347001  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:58 GMT
	I0108 20:34:58.348261  702522 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2945x","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"dfb7da4b-0626-4cd3-accf-49736fec486b","resourceVersion":"442","creationTimestamp":"2024-01-08T20:34:26Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"72dec1d4-3d8b-4eb1-86df-aa1268e266be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72dec1d4-3d8b-4eb1-86df-aa1268e266be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0108 20:34:58.348800  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:58.348818  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:58.348827  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:58.348834  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:58.352037  702522 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:34:58.352056  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:58.352064  702522 round_trippers.go:580]     Audit-Id: 3c5e190a-160c-4477-961a-ea7ce7d509f1
	I0108 20:34:58.352071  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:58.352078  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:58.352094  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:58.352101  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:58.352112  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:58 GMT
	I0108 20:34:58.353783  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"437","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 20:34:58.840090  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2945x
	I0108 20:34:58.840115  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:58.840125  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:58.840134  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:58.842640  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:58.842663  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:58.842679  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:58.842686  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:58.842693  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:58.842700  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:58 GMT
	I0108 20:34:58.842709  702522 round_trippers.go:580]     Audit-Id: 7a95df84-a97f-4393-a16d-cc0a07f06545
	I0108 20:34:58.842718  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:58.843137  702522 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2945x","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"dfb7da4b-0626-4cd3-accf-49736fec486b","resourceVersion":"442","creationTimestamp":"2024-01-08T20:34:26Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"72dec1d4-3d8b-4eb1-86df-aa1268e266be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72dec1d4-3d8b-4eb1-86df-aa1268e266be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0108 20:34:58.843697  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:58.843712  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:58.843726  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:58.843734  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:58.845864  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:58.845881  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:58.845888  702522 round_trippers.go:580]     Audit-Id: 9e58c86d-c4e0-4a07-87dd-4c392414b4f4
	I0108 20:34:58.845894  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:58.845901  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:58.845910  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:58.845916  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:58.845923  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:58 GMT
	I0108 20:34:58.846046  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"437","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 20:34:59.340027  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2945x
	I0108 20:34:59.340047  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:59.340057  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:59.340064  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:59.342561  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:59.342582  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:59.342590  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:59.342614  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:59.342628  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:59 GMT
	I0108 20:34:59.342634  702522 round_trippers.go:580]     Audit-Id: 45242c92-02c6-4981-8144-28d784c5e156
	I0108 20:34:59.342641  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:59.342650  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:59.343085  702522 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2945x","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"dfb7da4b-0626-4cd3-accf-49736fec486b","resourceVersion":"455","creationTimestamp":"2024-01-08T20:34:26Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"72dec1d4-3d8b-4eb1-86df-aa1268e266be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72dec1d4-3d8b-4eb1-86df-aa1268e266be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0108 20:34:59.343668  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:59.343685  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:59.343695  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:59.343702  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:59.345813  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:59.345829  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:59.345837  702522 round_trippers.go:580]     Audit-Id: 8c64e9bd-d241-4422-8443-f318b9345c90
	I0108 20:34:59.345844  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:59.345850  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:59.345855  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:59.345862  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:59.345868  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:59 GMT
	I0108 20:34:59.346039  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"437","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 20:34:59.346406  702522 pod_ready.go:92] pod "coredns-5dd5756b68-2945x" in "kube-system" namespace has status "Ready":"True"
	I0108 20:34:59.346417  702522 pod_ready.go:81] duration metric: took 1.006963793s waiting for pod "coredns-5dd5756b68-2945x" in "kube-system" namespace to be "Ready" ...
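
[editor's note] Each per-pod wait in this stretch reduces to the same test: the pod counts as "Ready" once its PodReady condition reports True (for coredns, the flip is visible above as resourceVersion 442 changing to 455). A hypothetical helper capturing that check (assumed names, not minikube's pod_ready.go):

package readiness

import corev1 "k8s.io/api/core/v1"

// IsPodReady reports whether pod's PodReady condition is True, which is the
// moment the log prints `has status "Ready":"True"`.
func IsPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
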
	I0108 20:34:59.346426  702522 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-933566" in "kube-system" namespace to be "Ready" ...
	I0108 20:34:59.346513  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-933566
	I0108 20:34:59.346520  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:59.346527  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:59.346534  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:59.348616  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:59.348633  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:59.348641  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:59.348647  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:59.348654  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:59 GMT
	I0108 20:34:59.348660  702522 round_trippers.go:580]     Audit-Id: 852332ab-58b5-4243-8eb7-5d844ba8f2cd
	I0108 20:34:59.348666  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:59.348672  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:59.348782  702522 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-933566","namespace":"kube-system","uid":"c53171eb-8f57-4639-9d22-203811cf58f2","resourceVersion":"424","creationTimestamp":"2024-01-08T20:34:13Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"497df64d2b22876329916355928b85ab","kubernetes.io/config.mirror":"497df64d2b22876329916355928b85ab","kubernetes.io/config.seen":"2024-01-08T20:34:06.010525199Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0108 20:34:59.349194  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:59.349202  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:59.349210  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:59.349216  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:59.351274  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:59.351328  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:59.351349  702522 round_trippers.go:580]     Audit-Id: f67b24d5-65f4-4bb3-b6e6-edb1c8aff06f
	I0108 20:34:59.351372  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:59.351401  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:59.351430  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:59.351451  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:59.351471  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:59 GMT
	I0108 20:34:59.351636  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"437","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 20:34:59.352015  702522 pod_ready.go:92] pod "etcd-multinode-933566" in "kube-system" namespace has status "Ready":"True"
	I0108 20:34:59.352033  702522 pod_ready.go:81] duration metric: took 5.600518ms waiting for pod "etcd-multinode-933566" in "kube-system" namespace to be "Ready" ...
	I0108 20:34:59.352046  702522 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-933566" in "kube-system" namespace to be "Ready" ...
	I0108 20:34:59.352110  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-933566
	I0108 20:34:59.352120  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:59.352127  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:59.352134  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:59.354224  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:59.354283  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:59.354305  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:59.354327  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:59.354360  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:59.354389  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:59 GMT
	I0108 20:34:59.354411  702522 round_trippers.go:580]     Audit-Id: 0ea2b648-8edd-45d7-adea-18c1d9d53bd0
	I0108 20:34:59.354425  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:59.354581  702522 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-933566","namespace":"kube-system","uid":"6a1a3eb3-8722-42af-89c7-99e38dd67209","resourceVersion":"425","creationTimestamp":"2024-01-08T20:34:14Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"7cce4a89903b4d928f46e67897916f84","kubernetes.io/config.mirror":"7cce4a89903b4d928f46e67897916f84","kubernetes.io/config.seen":"2024-01-08T20:34:13.891794977Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0108 20:34:59.355124  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:59.355138  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:59.355146  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:59.355153  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:59.357172  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:59.357218  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:59.357253  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:59.357265  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:59.357272  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:59.357287  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:59.357300  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:59 GMT
	I0108 20:34:59.357310  702522 round_trippers.go:580]     Audit-Id: d42ddb8e-09f2-4a5a-94d0-d0e3e0ccb74c
	I0108 20:34:59.357436  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"437","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 20:34:59.357798  702522 pod_ready.go:92] pod "kube-apiserver-multinode-933566" in "kube-system" namespace has status "Ready":"True"
	I0108 20:34:59.357813  702522 pod_ready.go:81] duration metric: took 5.754908ms waiting for pod "kube-apiserver-multinode-933566" in "kube-system" namespace to be "Ready" ...
	I0108 20:34:59.357823  702522 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-933566" in "kube-system" namespace to be "Ready" ...
	I0108 20:34:59.357880  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-933566
	I0108 20:34:59.357890  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:59.357898  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:59.357905  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:59.360091  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:59.360113  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:59.360125  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:59.360132  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:59.360164  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:59.360177  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:59 GMT
	I0108 20:34:59.360183  702522 round_trippers.go:580]     Audit-Id: 2ee060e6-355d-41d8-8948-cdbc49406383
	I0108 20:34:59.360189  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:59.360372  702522 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-933566","namespace":"kube-system","uid":"1138bd89-ec87-4e01-8763-875d458d57a2","resourceVersion":"426","creationTimestamp":"2024-01-08T20:34:14Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"92a9565c2579701599da39a047c04246","kubernetes.io/config.mirror":"92a9565c2579701599da39a047c04246","kubernetes.io/config.seen":"2024-01-08T20:34:13.891800893Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0108 20:34:59.360866  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:59.360882  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:59.360890  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:59.360896  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:59.362857  702522 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 20:34:59.362877  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:59.362886  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:59 GMT
	I0108 20:34:59.362892  702522 round_trippers.go:580]     Audit-Id: 52f428ab-8dd4-43aa-a8a3-0dc682244f8e
	I0108 20:34:59.362898  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:59.362904  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:59.362911  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:59.362918  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:59.363249  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"437","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 20:34:59.363630  702522 pod_ready.go:92] pod "kube-controller-manager-multinode-933566" in "kube-system" namespace has status "Ready":"True"
	I0108 20:34:59.363649  702522 pod_ready.go:81] duration metric: took 5.813525ms waiting for pod "kube-controller-manager-multinode-933566" in "kube-system" namespace to be "Ready" ...
	I0108 20:34:59.363665  702522 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lljgl" in "kube-system" namespace to be "Ready" ...
	I0108 20:34:59.363759  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lljgl
	I0108 20:34:59.363771  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:59.363778  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:59.363785  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:59.365844  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:59.365885  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:59.365920  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:59.365947  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:59 GMT
	I0108 20:34:59.365961  702522 round_trippers.go:580]     Audit-Id: 1a82c0a1-4608-4c69-b3c0-2a292e21e96d
	I0108 20:34:59.365968  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:59.365975  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:59.365981  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:59.366120  702522 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lljgl","generateName":"kube-proxy-","namespace":"kube-system","uid":"7c0d75bb-8b31-4b55-8972-26dc6c5debb7","resourceVersion":"420","creationTimestamp":"2024-01-08T20:34:26Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d86a78ab-417d-4a21-a1de-e0a57cc46b17","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d86a78ab-417d-4a21-a1de-e0a57cc46b17\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0108 20:34:59.508861  702522 request.go:629] Waited for 142.26411ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:59.508949  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:59.508959  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:59.508967  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:59.508974  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:59.511429  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:59.511492  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:59.511513  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:59.511525  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:59.511532  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:59.511538  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:59 GMT
	I0108 20:34:59.511545  702522 round_trippers.go:580]     Audit-Id: 744d8a84-c7b0-4606-bb63-418a28decf11
	I0108 20:34:59.511570  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:59.511710  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"437","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 20:34:59.512129  702522 pod_ready.go:92] pod "kube-proxy-lljgl" in "kube-system" namespace has status "Ready":"True"
	I0108 20:34:59.512146  702522 pod_ready.go:81] duration metric: took 148.471042ms waiting for pod "kube-proxy-lljgl" in "kube-system" namespace to be "Ready" ...
	I0108 20:34:59.512160  702522 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-933566" in "kube-system" namespace to be "Ready" ...
	I0108 20:34:59.708988  702522 request.go:629] Waited for 196.745988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-933566
	I0108 20:34:59.709079  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-933566
	I0108 20:34:59.709093  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:59.709102  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:59.709109  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:59.711554  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:59.711615  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:59.711636  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:59.711658  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:59.711670  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:59.711692  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:59.711700  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:59 GMT
	I0108 20:34:59.711724  702522 round_trippers.go:580]     Audit-Id: 0d1f85f8-1e88-4060-af19-9e020fb1fb98
	I0108 20:34:59.711864  702522 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-933566","namespace":"kube-system","uid":"8806e2fe-d851-40d7-84bb-c2e96df92fc8","resourceVersion":"427","creationTimestamp":"2024-01-08T20:34:14Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b9d9df99a15b95ebb51416e26bd0091a","kubernetes.io/config.mirror":"b9d9df99a15b95ebb51416e26bd0091a","kubernetes.io/config.seen":"2024-01-08T20:34:13.891802386Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0108 20:34:59.908540  702522 request.go:629] Waited for 196.237381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:59.908631  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:34:59.908640  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:59.908650  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:59.908657  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:59.911090  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:34:59.911144  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:59.911153  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:59 GMT
	I0108 20:34:59.911162  702522 round_trippers.go:580]     Audit-Id: e2eff6f8-e6fa-4497-a02d-290b697a7211
	I0108 20:34:59.911174  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:59.911181  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:59.911194  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:59.911205  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:59.911315  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"437","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 20:34:59.911726  702522 pod_ready.go:92] pod "kube-scheduler-multinode-933566" in "kube-system" namespace has status "Ready":"True"
	I0108 20:34:59.911745  702522 pod_ready.go:81] duration metric: took 399.575092ms waiting for pod "kube-scheduler-multinode-933566" in "kube-system" namespace to be "Ready" ...
	I0108 20:34:59.911757  702522 pod_ready.go:38] duration metric: took 1.583688545s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
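
The pod_ready waits above poll each control-plane pod until its Ready condition reports True, re-fetching the node between checks. A minimal client-go sketch of an equivalent readiness wait; the kubeconfig path, pod name, and poll interval are illustrative, not minikube's actual implementation:

    // podready_sketch.go: poll a pod until its Ready condition is True,
    // mirroring the pod_ready.go waits logged above. Kubeconfig path and
    // pod name are hypothetical.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s budget in the log
    	for time.Now().Before(deadline) {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
    			"kube-scheduler-multinode-933566", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for Ready")
    }
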
	I0108 20:34:59.911775  702522 api_server.go:52] waiting for apiserver process to appear ...
	I0108 20:34:59.911844  702522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:34:59.923224  702522 command_runner.go:130] > 1273
	I0108 20:34:59.924491  702522 api_server.go:72] duration metric: took 32.720204747s to wait for apiserver process to appear ...
	I0108 20:34:59.924513  702522 api_server.go:88] waiting for apiserver healthz status ...
	I0108 20:34:59.924535  702522 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0108 20:34:59.933259  702522 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0108 20:34:59.933342  702522 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0108 20:34:59.933356  702522 round_trippers.go:469] Request Headers:
	I0108 20:34:59.933366  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:34:59.933373  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:34:59.934696  702522 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 20:34:59.934722  702522 round_trippers.go:577] Response Headers:
	I0108 20:34:59.934731  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:34:59.934738  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:34:59.934745  702522 round_trippers.go:580]     Content-Length: 264
	I0108 20:34:59.934777  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:34:59 GMT
	I0108 20:34:59.934788  702522 round_trippers.go:580]     Audit-Id: de13bd05-b07a-4b4e-a901-e98f0b4aa290
	I0108 20:34:59.934794  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:34:59.934803  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:34:59.934834  702522 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I0108 20:34:59.934916  702522 api_server.go:141] control plane version: v1.28.4
	I0108 20:34:59.934934  702522 api_server.go:131] duration metric: took 10.414297ms to wait for apiserver health ...
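
The health gate above is two probes: a raw GET of /healthz (the bare "ok" body) followed by /version, whose JSON is the version.Info payload shown. A hedged sketch of the same two probes through client-go's discovery client; the kubeconfig path is an assumption:

    // healthz_sketch.go: hit /healthz then read the server version, as
    // api_server.go does above. Kubeconfig path is hypothetical.
    package main

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// GET /healthz through the discovery REST client; body "ok" means healthy.
    	body, err := client.Discovery().RESTClient().Get().
    		AbsPath("/healthz").Do(context.TODO()).Raw()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("healthz: %s\n", body)
    	// GET /version; returns the {"major","minor","gitVersion",...} payload above.
    	v, err := client.Discovery().ServerVersion()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("control plane version:", v.GitVersion) // e.g. v1.28.4
    }
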
	I0108 20:34:59.934942  702522 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 20:35:00.109388  702522 request.go:629] Waited for 174.337721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 20:35:00.109468  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 20:35:00.109505  702522 round_trippers.go:469] Request Headers:
	I0108 20:35:00.109517  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:35:00.109528  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:35:00.113772  702522 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 20:35:00.113849  702522 round_trippers.go:577] Response Headers:
	I0108 20:35:00.113875  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:35:00.113899  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:35:00 GMT
	I0108 20:35:00.113935  702522 round_trippers.go:580]     Audit-Id: 2a28b238-07a9-45a6-8710-ef069d13d01a
	I0108 20:35:00.113951  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:35:00.113959  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:35:00.113965  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:35:00.114518  702522 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"460"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2945x","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"dfb7da4b-0626-4cd3-accf-49736fec486b","resourceVersion":"455","creationTimestamp":"2024-01-08T20:34:26Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"72dec1d4-3d8b-4eb1-86df-aa1268e266be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72dec1d4-3d8b-4eb1-86df-aa1268e266be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0108 20:35:00.117040  702522 system_pods.go:59] 8 kube-system pods found
	I0108 20:35:00.117082  702522 system_pods.go:61] "coredns-5dd5756b68-2945x" [dfb7da4b-0626-4cd3-accf-49736fec486b] Running
	I0108 20:35:00.117091  702522 system_pods.go:61] "etcd-multinode-933566" [c53171eb-8f57-4639-9d22-203811cf58f2] Running
	I0108 20:35:00.117099  702522 system_pods.go:61] "kindnet-7wmrt" [a3c01562-5fb4-40e3-81c7-53be33365b5e] Running
	I0108 20:35:00.117109  702522 system_pods.go:61] "kube-apiserver-multinode-933566" [6a1a3eb3-8722-42af-89c7-99e38dd67209] Running
	I0108 20:35:00.117117  702522 system_pods.go:61] "kube-controller-manager-multinode-933566" [1138bd89-ec87-4e01-8763-875d458d57a2] Running
	I0108 20:35:00.117123  702522 system_pods.go:61] "kube-proxy-lljgl" [7c0d75bb-8b31-4b55-8972-26dc6c5debb7] Running
	I0108 20:35:00.117131  702522 system_pods.go:61] "kube-scheduler-multinode-933566" [8806e2fe-d851-40d7-84bb-c2e96df92fc8] Running
	I0108 20:35:00.117145  702522 system_pods.go:61] "storage-provisioner" [b11f34a8-ca65-4977-b11c-a1d51dcb66e6] Running
	I0108 20:35:00.117161  702522 system_pods.go:74] duration metric: took 182.212614ms to wait for pod list to return data ...
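
The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's token-bucket rate limiter, which delays requests on the client before they ever reach the server's APF queues; an unconfigured rest.Config defaults to 5 QPS with a burst of 10, which is consistent with the ~150-200ms delays logged here. A sketch of where those knobs live; the values below are illustrative, not minikube's settings:

    // throttle_sketch.go: configure client-go's client-side rate limiter.
    // QPS/Burst values are illustrative only.
    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cfg.QPS = 50    // steady-state requests per second allowed by the limiter
    	cfg.Burst = 100 // short-term burst allowance before throttling kicks in
    	if _, err := kubernetes.NewForConfig(cfg); err != nil {
    		panic(err)
    	}
    	// With the defaults (QPS=5, Burst=10), a burst of GETs like the
    	// status checks above gets queued client-side, as logged.
    }
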
	I0108 20:35:00.117170  702522 default_sa.go:34] waiting for default service account to be created ...
	I0108 20:35:00.308529  702522 request.go:629] Waited for 191.240405ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0108 20:35:00.308587  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0108 20:35:00.308611  702522 round_trippers.go:469] Request Headers:
	I0108 20:35:00.308631  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:35:00.308639  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:35:00.311226  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:35:00.311252  702522 round_trippers.go:577] Response Headers:
	I0108 20:35:00.311261  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:35:00.311267  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:35:00.311274  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:35:00.311281  702522 round_trippers.go:580]     Content-Length: 261
	I0108 20:35:00.311287  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:35:00 GMT
	I0108 20:35:00.311323  702522 round_trippers.go:580]     Audit-Id: 08c56610-164c-4f30-9b05-7481ac68a3b1
	I0108 20:35:00.311337  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:35:00.311362  702522 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"460"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b2d6b65d-3998-450d-b399-356465636763","resourceVersion":"348","creationTimestamp":"2024-01-08T20:34:26Z"}}]}
	I0108 20:35:00.311615  702522 default_sa.go:45] found service account: "default"
	I0108 20:35:00.311635  702522 default_sa.go:55] duration metric: took 194.452101ms for default service account to be created ...
	I0108 20:35:00.311645  702522 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 20:35:00.509007  702522 request.go:629] Waited for 197.296213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 20:35:00.509154  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 20:35:00.509168  702522 round_trippers.go:469] Request Headers:
	I0108 20:35:00.509178  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:35:00.509196  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:35:00.512793  702522 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:35:00.512821  702522 round_trippers.go:577] Response Headers:
	I0108 20:35:00.512830  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:35:00 GMT
	I0108 20:35:00.512836  702522 round_trippers.go:580]     Audit-Id: 150070b6-4866-4350-bb36-344f4fb0fea6
	I0108 20:35:00.512843  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:35:00.512849  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:35:00.512855  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:35:00.512866  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:35:00.513482  702522 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"461"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2945x","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"dfb7da4b-0626-4cd3-accf-49736fec486b","resourceVersion":"455","creationTimestamp":"2024-01-08T20:34:26Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"72dec1d4-3d8b-4eb1-86df-aa1268e266be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72dec1d4-3d8b-4eb1-86df-aa1268e266be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0108 20:35:00.515936  702522 system_pods.go:86] 8 kube-system pods found
	I0108 20:35:00.515964  702522 system_pods.go:89] "coredns-5dd5756b68-2945x" [dfb7da4b-0626-4cd3-accf-49736fec486b] Running
	I0108 20:35:00.515972  702522 system_pods.go:89] "etcd-multinode-933566" [c53171eb-8f57-4639-9d22-203811cf58f2] Running
	I0108 20:35:00.515979  702522 system_pods.go:89] "kindnet-7wmrt" [a3c01562-5fb4-40e3-81c7-53be33365b5e] Running
	I0108 20:35:00.515989  702522 system_pods.go:89] "kube-apiserver-multinode-933566" [6a1a3eb3-8722-42af-89c7-99e38dd67209] Running
	I0108 20:35:00.515999  702522 system_pods.go:89] "kube-controller-manager-multinode-933566" [1138bd89-ec87-4e01-8763-875d458d57a2] Running
	I0108 20:35:00.516004  702522 system_pods.go:89] "kube-proxy-lljgl" [7c0d75bb-8b31-4b55-8972-26dc6c5debb7] Running
	I0108 20:35:00.516009  702522 system_pods.go:89] "kube-scheduler-multinode-933566" [8806e2fe-d851-40d7-84bb-c2e96df92fc8] Running
	I0108 20:35:00.516015  702522 system_pods.go:89] "storage-provisioner" [b11f34a8-ca65-4977-b11c-a1d51dcb66e6] Running
	I0108 20:35:00.516025  702522 system_pods.go:126] duration metric: took 204.370738ms to wait for k8s-apps to be running ...
	I0108 20:35:00.516033  702522 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 20:35:00.516099  702522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:35:00.530285  702522 system_svc.go:56] duration metric: took 14.241417ms WaitForService to wait for kubelet.
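
The kubelet probe shells out to systemctl and relies only on the exit status, since is-active --quiet prints nothing. A minimal exec sketch of the same check, run directly rather than over SSH:

    // kubelet_check_sketch.go: report whether the kubelet unit is active
    // using only systemctl's exit status, as the system_svc wait above does.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// --quiet suppresses output; a zero exit status means "active".
    	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
    		fmt.Println("kubelet is not active:", err)
    		return
    	}
    	fmt.Println("kubelet is active")
    }
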
	I0108 20:35:00.530351  702522 kubeadm.go:581] duration metric: took 33.326069134s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 20:35:00.530370  702522 node_conditions.go:102] verifying NodePressure condition ...
	I0108 20:35:00.708699  702522 request.go:629] Waited for 178.260349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0108 20:35:00.708786  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0108 20:35:00.708797  702522 round_trippers.go:469] Request Headers:
	I0108 20:35:00.708807  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:35:00.708820  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:35:00.711331  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:35:00.711357  702522 round_trippers.go:577] Response Headers:
	I0108 20:35:00.711366  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:35:00.711372  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:35:00.711379  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:35:00.711386  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:35:00 GMT
	I0108 20:35:00.711393  702522 round_trippers.go:580]     Audit-Id: 50181ebd-40c5-4811-8642-58c667dedbb8
	I0108 20:35:00.711400  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:35:00.711505  702522 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"461"},"items":[{"metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"437","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I0108 20:35:00.711990  702522 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0108 20:35:00.712015  702522 node_conditions.go:123] node cpu capacity is 2
	I0108 20:35:00.712029  702522 node_conditions.go:105] duration metric: took 181.653544ms to run NodePressure ...
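
Verifying NodePressure reads each node's capacity (the 203034800Ki of ephemeral storage and 2 CPUs above come from node.Status.Capacity) and the node's pressure conditions. A client-go sketch of the same read; the kubeconfig path is an assumption:

    // nodepressure_sketch.go: read node capacity and pressure conditions,
    // as node_conditions.go does above. Kubeconfig path is hypothetical.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    		for _, c := range n.Status.Conditions {
    			// MemoryPressure and DiskPressure should both report False.
    			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
    				fmt.Printf("  %s=%s\n", c.Type, c.Status)
    			}
    		}
    	}
    }
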
	I0108 20:35:00.712044  702522 start.go:228] waiting for startup goroutines ...
	I0108 20:35:00.712055  702522 start.go:233] waiting for cluster config update ...
	I0108 20:35:00.712064  702522 start.go:242] writing updated cluster config ...
	I0108 20:35:00.714717  702522 out.go:177] 
	I0108 20:35:00.716804  702522 config.go:182] Loaded profile config "multinode-933566": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:35:00.716905  702522 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/config.json ...
	I0108 20:35:00.719353  702522 out.go:177] * Starting worker node multinode-933566-m02 in cluster multinode-933566
	I0108 20:35:00.722011  702522 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 20:35:00.724078  702522 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:35:00.726092  702522 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:35:00.726119  702522 cache.go:56] Caching tarball of preloaded images
	I0108 20:35:00.726135  702522 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0108 20:35:00.726211  702522 preload.go:174] Found /home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0108 20:35:00.726222  702522 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 20:35:00.726310  702522 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/config.json ...
	I0108 20:35:00.745845  702522 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0108 20:35:00.745873  702522 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0108 20:35:00.745895  702522 cache.go:194] Successfully downloaded all kic artifacts
	I0108 20:35:00.745927  702522 start.go:365] acquiring machines lock for multinode-933566-m02: {Name:mk745fa34e0f3c1b8d2130e88ce93c7d2aeedef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:35:00.746211  702522 start.go:369] acquired machines lock for "multinode-933566-m02" in 250.841µs
	I0108 20:35:00.746242  702522 start.go:93] Provisioning new machine with config: &{Name:multinode-933566 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-933566 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 20:35:00.746328  702522 start.go:125] createHost starting for "m02" (driver="docker")
	I0108 20:35:00.748811  702522 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0108 20:35:00.748970  702522 start.go:159] libmachine.API.Create for "multinode-933566" (driver="docker")
	I0108 20:35:00.748993  702522 client.go:168] LocalClient.Create starting
	I0108 20:35:00.749070  702522 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem
	I0108 20:35:00.749112  702522 main.go:141] libmachine: Decoding PEM data...
	I0108 20:35:00.749132  702522 main.go:141] libmachine: Parsing certificate...
	I0108 20:35:00.749193  702522 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem
	I0108 20:35:00.749215  702522 main.go:141] libmachine: Decoding PEM data...
	I0108 20:35:00.749227  702522 main.go:141] libmachine: Parsing certificate...
	I0108 20:35:00.749465  702522 cli_runner.go:164] Run: docker network inspect multinode-933566 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 20:35:00.767498  702522 network_create.go:77] Found existing network {name:multinode-933566 subnet:0x400293d740 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0108 20:35:00.767553  702522 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-933566-m02" container
	I0108 20:35:00.767641  702522 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 20:35:00.784866  702522 cli_runner.go:164] Run: docker volume create multinode-933566-m02 --label name.minikube.sigs.k8s.io=multinode-933566-m02 --label created_by.minikube.sigs.k8s.io=true
	I0108 20:35:00.803566  702522 oci.go:103] Successfully created a docker volume multinode-933566-m02
	I0108 20:35:00.803658  702522 cli_runner.go:164] Run: docker run --rm --name multinode-933566-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-933566-m02 --entrypoint /usr/bin/test -v multinode-933566-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I0108 20:35:01.373658  702522 oci.go:107] Successfully prepared a docker volume multinode-933566-m02
	I0108 20:35:01.373700  702522 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:35:01.373722  702522 kic.go:194] Starting extracting preloaded images to volume ...
	I0108 20:35:01.373813  702522 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-933566-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 20:35:05.656642  702522 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-933566-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (4.282777551s)
	I0108 20:35:05.656680  702522 kic.go:203] duration metric: took 4.282956 seconds to extract preloaded images to volume
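
The extraction step works by bind-mounting the lz4 preload tarball read-only into a throwaway container whose entrypoint is tar, pointed at the new node's /var volume. A hedged os/exec sketch of the same docker invocation; paths and names are copied from the log, with the image's @sha256 digest elided for brevity:

    // preload_extract_sketch.go: replay the preload extraction above with
    // os/exec. Paths/image come from the log; error handling is minimal.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	tarball := "/home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4"
    	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857" // digest elided
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",       // tarball mounted read-only
    		"-v", "multinode-933566-m02:/extractDir", // the node's /var volume
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		fmt.Printf("extract failed: %v\n%s\n", err, out)
    		return
    	}
    	fmt.Println("preload extracted")
    }

Because the container exits as soon as tar finishes, --rm leaves nothing behind but the populated named volume.
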
	W0108 20:35:05.656815  702522 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 20:35:05.656927  702522 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 20:35:05.724682  702522 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-933566-m02 --name multinode-933566-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-933566-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-933566-m02 --network multinode-933566 --ip 192.168.58.3 --volume multinode-933566-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0108 20:35:06.091915  702522 cli_runner.go:164] Run: docker container inspect multinode-933566-m02 --format={{.State.Running}}
	I0108 20:35:06.114632  702522 cli_runner.go:164] Run: docker container inspect multinode-933566-m02 --format={{.State.Status}}
	I0108 20:35:06.136044  702522 cli_runner.go:164] Run: docker exec multinode-933566-m02 stat /var/lib/dpkg/alternatives/iptables
	I0108 20:35:06.232823  702522 oci.go:144] the created container "multinode-933566-m02" has a running status.
	I0108 20:35:06.232852  702522 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17907-633350/.minikube/machines/multinode-933566-m02/id_rsa...
	I0108 20:35:07.180620  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/machines/multinode-933566-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0108 20:35:07.180668  702522 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17907-633350/.minikube/machines/multinode-933566-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 20:35:07.216066  702522 cli_runner.go:164] Run: docker container inspect multinode-933566-m02 --format={{.State.Status}}
	I0108 20:35:07.239318  702522 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 20:35:07.239341  702522 kic_runner.go:114] Args: [docker exec --privileged multinode-933566-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 20:35:07.306604  702522 cli_runner.go:164] Run: docker container inspect multinode-933566-m02 --format={{.State.Status}}
	I0108 20:35:07.329798  702522 machine.go:88] provisioning docker machine ...
	I0108 20:35:07.329834  702522 ubuntu.go:169] provisioning hostname "multinode-933566-m02"
	I0108 20:35:07.329907  702522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-933566-m02
	I0108 20:35:07.355304  702522 main.go:141] libmachine: Using SSH client type: native
	I0108 20:35:07.355715  702522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33484 <nil> <nil>}
	I0108 20:35:07.355738  702522 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-933566-m02 && echo "multinode-933566-m02" | sudo tee /etc/hostname
	I0108 20:35:07.520331  702522 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-933566-m02
	
	I0108 20:35:07.520488  702522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-933566-m02
	I0108 20:35:07.542649  702522 main.go:141] libmachine: Using SSH client type: native
	I0108 20:35:07.543059  702522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33484 <nil> <nil>}
	I0108 20:35:07.543080  702522 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-933566-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-933566-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-933566-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:35:07.691763  702522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
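
Each provisioning snippet above runs over SSH to the container's forwarded port (127.0.0.1:33484) as the "docker" user, authenticating with the freshly generated id_rsa. A minimal golang.org/x/crypto/ssh sketch of running the hostname command the same way; the InsecureIgnoreHostKey choice is an assumption that fits a localhost-forwarded port:

    // ssh_provision_sketch.go: run the hostname provisioning command over
    // SSH, as libmachine does above. Key path, port, and user are taken
    // from the log.
    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/17907-633350/.minikube/machines/multinode-933566-m02/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // assumption: OK for a localhost-forwarded port
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:33484", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(
    		`sudo hostname multinode-933566-m02 && echo "multinode-933566-m02" | sudo tee /etc/hostname`)
    	fmt.Printf("output: %s err: %v\n", out, err)
    }
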
	I0108 20:35:07.691786  702522 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17907-633350/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-633350/.minikube}
	I0108 20:35:07.691810  702522 ubuntu.go:177] setting up certificates
	I0108 20:35:07.691820  702522 provision.go:83] configureAuth start
	I0108 20:35:07.691885  702522 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-933566-m02
	I0108 20:35:07.710935  702522 provision.go:138] copyHostCerts
	I0108 20:35:07.710980  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17907-633350/.minikube/ca.pem
	I0108 20:35:07.711013  702522 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-633350/.minikube/ca.pem, removing ...
	I0108 20:35:07.711020  702522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-633350/.minikube/ca.pem
	I0108 20:35:07.711098  702522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-633350/.minikube/ca.pem (1082 bytes)
	I0108 20:35:07.711171  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17907-633350/.minikube/cert.pem
	I0108 20:35:07.711188  702522 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-633350/.minikube/cert.pem, removing ...
	I0108 20:35:07.711193  702522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-633350/.minikube/cert.pem
	I0108 20:35:07.711217  702522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-633350/.minikube/cert.pem (1123 bytes)
	I0108 20:35:07.711282  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17907-633350/.minikube/key.pem
	I0108 20:35:07.711297  702522 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-633350/.minikube/key.pem, removing ...
	I0108 20:35:07.711301  702522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-633350/.minikube/key.pem
	I0108 20:35:07.711326  702522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-633350/.minikube/key.pem (1679 bytes)
	I0108 20:35:07.711368  702522 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca-key.pem org=jenkins.multinode-933566-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-933566-m02]
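
The server cert is issued against the CA in certs/ca.pem with the SANs listed above (node IP, loopback, localhost, and both machine names). A self-contained crypto/x509 sketch of issuing a SAN certificate from an existing CA; the file paths, PKCS#8 key format, and ECDSA key type are assumptions, not minikube's exact choices:

    // servercert_sketch.go: issue a server cert carrying the SANs from the
    // provision.go line above, signed by an existing CA. Paths, key formats,
    // and key type are assumptions.
    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func mustReadPEM(path string) *pem.Block {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block in " + path)
    	}
    	return block
    }

    func main() {
    	caCert, err := x509.ParseCertificate(mustReadPEM("ca.pem").Bytes) // assumed path
    	if err != nil {
    		panic(err)
    	}
    	// Assumes a PKCS#8 CA key; a PKCS#1 RSA key would need ParsePKCS1PrivateKey.
    	caKey, err := x509.ParsePKCS8PrivateKey(mustReadPEM("ca-key.pem").Bytes)
    	if err != nil {
    		panic(err)
    	}
    	srvKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-933566-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s from the config dump
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs copied from the san=[...] list logged above.
    		IPAddresses: []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
    		DNSNames:    []string{"localhost", "minikube", "multinode-933566-m02"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
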
	I0108 20:35:07.970923  702522 provision.go:172] copyRemoteCerts
	I0108 20:35:07.970998  702522 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:35:07.971046  702522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-933566-m02
	I0108 20:35:07.991993  702522 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/multinode-933566-m02/id_rsa Username:docker}
	I0108 20:35:08.093611  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 20:35:08.093674  702522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 20:35:08.124490  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 20:35:08.124582  702522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 20:35:08.155169  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 20:35:08.155238  702522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0108 20:35:08.187189  702522 provision.go:86] duration metric: configureAuth took 495.355264ms
	I0108 20:35:08.187221  702522 ubuntu.go:193] setting minikube options for container-runtime
	I0108 20:35:08.187431  702522 config.go:182] Loaded profile config "multinode-933566": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:35:08.187547  702522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-933566-m02
	I0108 20:35:08.206033  702522 main.go:141] libmachine: Using SSH client type: native
	I0108 20:35:08.206496  702522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33484 <nil> <nil>}
	I0108 20:35:08.206516  702522 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 20:35:08.469213  702522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 20:35:08.469241  702522 machine.go:91] provisioned docker machine in 1.139422838s
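
The %!s(MISSING) in the command above is Go's fmt marker for a format verb whose argument was not rendered when the command was logged; the echoed output confirms what was actually written. The insecure-registry flag is delivered as a sysconfig drop-in and crio is restarted to pick it up. A sketch of the same step executed locally (os/exec standing in for the SSH session; needs root):

    // crio_opts_sketch.go: write the sysconfig drop-in and restart crio,
    // the step whose SSH transcript appears above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	opts := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
    	if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(opts), 0o644); err != nil {
    		panic(err)
    	}
    	// Restart crio so it reloads the drop-in.
    	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
    		fmt.Printf("restart failed: %v\n%s\n", err, out)
    		return
    	}
    	fmt.Println("crio restarted with minikube options")
    }
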
	I0108 20:35:08.469251  702522 client.go:171] LocalClient.Create took 7.720252241s
	I0108 20:35:08.469264  702522 start.go:167] duration metric: libmachine.API.Create for "multinode-933566" took 7.720295392s
	I0108 20:35:08.469273  702522 start.go:300] post-start starting for "multinode-933566-m02" (driver="docker")
	I0108 20:35:08.469282  702522 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:35:08.469354  702522 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:35:08.469401  702522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-933566-m02
	I0108 20:35:08.488012  702522 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/multinode-933566-m02/id_rsa Username:docker}
	I0108 20:35:08.589435  702522 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:35:08.593605  702522 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0108 20:35:08.593667  702522 command_runner.go:130] > NAME="Ubuntu"
	I0108 20:35:08.593689  702522 command_runner.go:130] > VERSION_ID="22.04"
	I0108 20:35:08.593711  702522 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0108 20:35:08.593725  702522 command_runner.go:130] > VERSION_CODENAME=jammy
	I0108 20:35:08.593731  702522 command_runner.go:130] > ID=ubuntu
	I0108 20:35:08.593736  702522 command_runner.go:130] > ID_LIKE=debian
	I0108 20:35:08.593744  702522 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0108 20:35:08.593750  702522 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0108 20:35:08.593758  702522 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0108 20:35:08.593766  702522 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0108 20:35:08.593776  702522 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0108 20:35:08.593827  702522 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 20:35:08.593866  702522 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 20:35:08.593881  702522 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 20:35:08.593888  702522 info.go:137] Remote host: Ubuntu 22.04.3 LTS
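
libmachine parses /etc/os-release into a typed struct, which is why keys with no corresponding field (VERSION_CODENAME, PRIVACY_POLICY_URL, UBUNTU_CODENAME) are logged as unsettable above. A small sketch of the same key=value parse into a map, which sidesteps those warnings:

    // osrelease_sketch.go: parse /etc/os-release key=value pairs into a
    // map, mirroring what libmachine's struct-based parser does above.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	f, err := os.Open("/etc/os-release")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	info := map[string]string{}
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if line == "" || strings.HasPrefix(line, "#") {
    			continue
    		}
    		k, v, ok := strings.Cut(line, "=")
    		if !ok {
    			continue
    		}
    		info[k] = strings.Trim(v, `"`) // values may be quoted
    	}
    	fmt.Println(info["PRETTY_NAME"]) // e.g. Ubuntu 22.04.3 LTS
    }
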
	I0108 20:35:08.593900  702522 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-633350/.minikube/addons for local assets ...
	I0108 20:35:08.593961  702522 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-633350/.minikube/files for local assets ...
	I0108 20:35:08.594044  702522 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/6387322.pem -> 6387322.pem in /etc/ssl/certs
	I0108 20:35:08.594054  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/6387322.pem -> /etc/ssl/certs/6387322.pem
	I0108 20:35:08.594154  702522 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 20:35:08.605137  702522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/6387322.pem --> /etc/ssl/certs/6387322.pem (1708 bytes)
	I0108 20:35:08.635685  702522 start.go:303] post-start completed in 166.396612ms
	I0108 20:35:08.636067  702522 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-933566-m02
	I0108 20:35:08.653991  702522 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/config.json ...
	I0108 20:35:08.654276  702522 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:35:08.654410  702522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-933566-m02
	I0108 20:35:08.672144  702522 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/multinode-933566-m02/id_rsa Username:docker}
	I0108 20:35:08.768301  702522 command_runner.go:130] > 15%!(MISSING)
	I0108 20:35:08.768869  702522 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 20:35:08.774858  702522 command_runner.go:130] > 167G
	I0108 20:35:08.774924  702522 start.go:128] duration metric: createHost completed in 8.028583763s
	I0108 20:35:08.774950  702522 start.go:83] releasing machines lock for "multinode-933566-m02", held for 8.028725721s
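
The 15%!(MISSING) above is another fmt artifact: the stray % in the captured df output trips the format parser, and the real result is simply 15%. Both df|awk probes can be replaced by a single statfs call; a rough sketch using golang.org/x/sys/unix (the used% arithmetic approximates df's, it is not identical):

    // diskspace_sketch.go: compute /var usage and free space with statfs,
    // roughly equivalent to the two df|awk probes above.
    package main

    import (
    	"fmt"

    	"golang.org/x/sys/unix"
    )

    func main() {
    	var st unix.Statfs_t
    	if err := unix.Statfs("/var", &st); err != nil {
    		panic(err)
    	}
    	total := st.Blocks * uint64(st.Bsize)
    	free := st.Bavail * uint64(st.Bsize) // space available to unprivileged users
    	usedPct := 100 - (free*100)/total
    	fmt.Printf("%d%% used, %dG free\n", usedPct, free/(1<<30))
    }
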
	I0108 20:35:08.775027  702522 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-933566-m02
	I0108 20:35:08.796140  702522 out.go:177] * Found network options:
	I0108 20:35:08.798136  702522 out.go:177]   - NO_PROXY=192.168.58.2
	W0108 20:35:08.800117  702522 proxy.go:119] fail to check proxy env: Error ip not in block
	W0108 20:35:08.800160  702522 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 20:35:08.800233  702522 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 20:35:08.800279  702522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-933566-m02
	I0108 20:35:08.800580  702522 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 20:35:08.800642  702522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-933566-m02
	I0108 20:35:08.820178  702522 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/multinode-933566-m02/id_rsa Username:docker}
	I0108 20:35:08.831882  702522 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/multinode-933566-m02/id_rsa Username:docker}
	I0108 20:35:09.088673  702522 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 20:35:09.088758  702522 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 20:35:09.094165  702522 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0108 20:35:09.094190  702522 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0108 20:35:09.094197  702522 command_runner.go:130] > Device: b3h/179d	Inode: 1568593     Links: 1
	I0108 20:35:09.094205  702522 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 20:35:09.094213  702522 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0108 20:35:09.094219  702522 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0108 20:35:09.094225  702522 command_runner.go:130] > Change: 2024-01-08 20:10:27.324654076 +0000
	I0108 20:35:09.094231  702522 command_runner.go:130] >  Birth: 2024-01-08 20:10:27.324654076 +0000
	I0108 20:35:09.094625  702522 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:35:09.121282  702522 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 20:35:09.121406  702522 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:35:09.169867  702522 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0108 20:35:09.170087  702522 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
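The CNI cleanup above renames any loopback/bridge/podman configs in /etc/cni/net.d to <name>.mk_disabled so they cannot shadow the CNI that minikube installs. A sketch of the bridge/podman half of that rename pass, assuming the directory and suffix from the log (this is not minikube's source):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI renames bridge/podman CNI configs to <name>.mk_disabled,
// matching the find/mv pass in the log. Hypothetical helper.
func disableBridgeCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, src)
		}
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println("disabled:", moved) // e.g. 87-podman-bridge.conflist, 100-crio-bridge.conf
}
```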
	I0108 20:35:09.170105  702522 start.go:475] detecting cgroup driver to use...
	I0108 20:35:09.170138  702522 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 20:35:09.170195  702522 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 20:35:09.189932  702522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 20:35:09.203976  702522 docker.go:217] disabling cri-docker service (if available) ...
	I0108 20:35:09.204067  702522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 20:35:09.221033  702522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 20:35:09.238621  702522 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 20:35:09.338529  702522 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 20:35:09.437430  702522 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0108 20:35:09.437472  702522 docker.go:233] disabling docker service ...
	I0108 20:35:09.437525  702522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 20:35:09.459137  702522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 20:35:09.472924  702522 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 20:35:09.571739  702522 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0108 20:35:09.571869  702522 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 20:35:09.674458  702522 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0108 20:35:09.674539  702522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
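Because this profile runs CRI-O, the cri-docker and docker units are stopped, disabled, and masked, and a final is-active check confirms the daemon stays down. A best-effort sketch of that sequence, shelling out to systemctl the way the log does (unit names taken from the log; not minikube's code):

```go
package main

import (
	"fmt"
	"os/exec"
)

func systemctl(args ...string) error {
	return exec.Command("sudo", append([]string{"systemctl"}, args...)...).Run()
}

func main() {
	// Stop, disable, and mask the units, tolerating failures the way the
	// log does (each step is attempted regardless of the previous one).
	steps := [][]string{
		{"stop", "-f", "cri-docker.socket"},
		{"stop", "-f", "cri-docker.service"},
		{"disable", "cri-docker.socket"},
		{"mask", "cri-docker.service"},
		{"stop", "-f", "docker.socket"},
		{"stop", "-f", "docker.service"},
		{"disable", "docker.socket"},
		{"mask", "docker.service"},
	}
	for _, s := range steps {
		_ = systemctl(s...)
	}
	// is-active exits non-zero once the unit is inactive, which is the goal.
	if err := systemctl("is-active", "--quiet", "service", "docker"); err != nil {
		fmt.Println("docker is inactive")
	}
}
```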
	I0108 20:35:09.688203  702522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:35:09.711022  702522 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
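crictl is then pointed at CRI-O's socket by writing a one-line /etc/crictl.yaml. A sketch of that write, using exactly the content the log shows being tee'd into place:

```go
package main

import "os"

func main() {
	// Exactly the content the log shows being written to /etc/crictl.yaml:
	// point crictl at CRI-O's CRI socket.
	const cfg = "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
	if err := os.WriteFile("/etc/crictl.yaml", []byte(cfg), 0o644); err != nil {
		panic(err)
	}
}
```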
	I0108 20:35:09.712531  702522 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 20:35:09.712640  702522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:35:09.725554  702522 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 20:35:09.725664  702522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:35:09.738170  702522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:35:09.751470  702522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
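The sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the pause:3.9 image and the cgroupfs driver with conmon in the pod cgroup. An equivalent rewrite in Go, keeping the same match-anything-on-the-line semantics as the sed expressions (a sketch, not minikube's implementation):

```go
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	s := string(data)
	// sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.9"`)
	// sed '/conmon_cgroup = .*/d'
	s = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n?`).ReplaceAllString(s, "")
	// sed 's|^.*cgroup_manager = .*$|...|' plus '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(path, []byte(s), 0o644); err != nil {
		panic(err)
	}
}
```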
	I0108 20:35:09.764280  702522 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 20:35:09.776559  702522 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 20:35:09.785536  702522 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0108 20:35:09.786673  702522 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 20:35:09.796761  702522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:35:09.887038  702522 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 20:35:10.030291  702522 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 20:35:10.030413  702522 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 20:35:10.035633  702522 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0108 20:35:10.035697  702522 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 20:35:10.035725  702522 command_runner.go:130] > Device: bch/188d	Inode: 190         Links: 1
	I0108 20:35:10.035747  702522 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 20:35:10.035766  702522 command_runner.go:130] > Access: 2024-01-08 20:35:10.012454062 +0000
	I0108 20:35:10.035788  702522 command_runner.go:130] > Modify: 2024-01-08 20:35:10.012454062 +0000
	I0108 20:35:10.035809  702522 command_runner.go:130] > Change: 2024-01-08 20:35:10.012454062 +0000
	I0108 20:35:10.035830  702522 command_runner.go:130] >  Birth: -
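"Will wait 60s for socket path" amounts to polling stat() on /var/run/crio/crio.sock after the restart until the socket appears. A minimal polling sketch; the 60s budget comes from the log, while the 500ms interval is an assumption:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls stat() until the path exists as a unix socket or the
// deadline passes. Hypothetical helper, not minikube's code.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("crio.sock is ready")
}
```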
	I0108 20:35:10.036090  702522 start.go:543] Will wait 60s for crictl version
	I0108 20:35:10.036176  702522 ssh_runner.go:195] Run: which crictl
	I0108 20:35:10.040360  702522 command_runner.go:130] > /usr/bin/crictl
	I0108 20:35:10.040858  702522 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 20:35:10.085587  702522 command_runner.go:130] > Version:  0.1.0
	I0108 20:35:10.085827  702522 command_runner.go:130] > RuntimeName:  cri-o
	I0108 20:35:10.086002  702522 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0108 20:35:10.086192  702522 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 20:35:10.089306  702522 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0108 20:35:10.089458  702522 ssh_runner.go:195] Run: crio --version
	I0108 20:35:10.132339  702522 command_runner.go:130] > crio version 1.24.6
	I0108 20:35:10.132398  702522 command_runner.go:130] > Version:          1.24.6
	I0108 20:35:10.132421  702522 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0108 20:35:10.132443  702522 command_runner.go:130] > GitTreeState:     clean
	I0108 20:35:10.132486  702522 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0108 20:35:10.132514  702522 command_runner.go:130] > GoVersion:        go1.18.2
	I0108 20:35:10.132536  702522 command_runner.go:130] > Compiler:         gc
	I0108 20:35:10.132557  702522 command_runner.go:130] > Platform:         linux/arm64
	I0108 20:35:10.132578  702522 command_runner.go:130] > Linkmode:         dynamic
	I0108 20:35:10.132610  702522 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 20:35:10.132630  702522 command_runner.go:130] > SeccompEnabled:   true
	I0108 20:35:10.132652  702522 command_runner.go:130] > AppArmorEnabled:  false
	I0108 20:35:10.134909  702522 ssh_runner.go:195] Run: crio --version
	I0108 20:35:10.180056  702522 command_runner.go:130] > crio version 1.24.6
	I0108 20:35:10.180114  702522 command_runner.go:130] > Version:          1.24.6
	I0108 20:35:10.180138  702522 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0108 20:35:10.180158  702522 command_runner.go:130] > GitTreeState:     clean
	I0108 20:35:10.180180  702522 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0108 20:35:10.180202  702522 command_runner.go:130] > GoVersion:        go1.18.2
	I0108 20:35:10.180231  702522 command_runner.go:130] > Compiler:         gc
	I0108 20:35:10.180250  702522 command_runner.go:130] > Platform:         linux/arm64
	I0108 20:35:10.180269  702522 command_runner.go:130] > Linkmode:         dynamic
	I0108 20:35:10.180293  702522 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 20:35:10.180321  702522 command_runner.go:130] > SeccompEnabled:   true
	I0108 20:35:10.180341  702522 command_runner.go:130] > AppArmorEnabled:  false
	I0108 20:35:10.185339  702522 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0108 20:35:10.187678  702522 out.go:177]   - env NO_PROXY=192.168.58.2
	I0108 20:35:10.189764  702522 cli_runner.go:164] Run: docker network inspect multinode-933566 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 20:35:10.207469  702522 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0108 20:35:10.212509  702522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
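The /etc/hosts rewrite above drops any stale host.minikube.internal line and appends the gateway mapping for this network (192.168.58.1). The same edit in Go, mirroring the grep/echo pipeline from the log (a sketch, not minikube's code):

```go
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.58.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Keep every line except a stale host.minikube.internal mapping, then
	// append the fresh one -- the same effect as the grep/echo pipeline.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
```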
	I0108 20:35:10.226637  702522 certs.go:56] Setting up /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566 for IP: 192.168.58.3
	I0108 20:35:10.226690  702522 certs.go:190] acquiring lock for shared ca certs: {Name:mk28124a9f2c671691fce8a4307fb3ec09e97812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:35:10.226831  702522 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17907-633350/.minikube/ca.key
	I0108 20:35:10.226870  702522 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17907-633350/.minikube/proxy-client-ca.key
	I0108 20:35:10.226880  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 20:35:10.226893  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 20:35:10.226905  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 20:35:10.226915  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 20:35:10.226976  702522 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/home/jenkins/minikube-integration/17907-633350/.minikube/certs/638732.pem (1338 bytes)
	W0108 20:35:10.227005  702522 certs.go:433] ignoring /home/jenkins/minikube-integration/17907-633350/.minikube/certs/home/jenkins/minikube-integration/17907-633350/.minikube/certs/638732_empty.pem, impossibly tiny 0 bytes
	I0108 20:35:10.227015  702522 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 20:35:10.227039  702522 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem (1082 bytes)
	I0108 20:35:10.227066  702522 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem (1123 bytes)
	I0108 20:35:10.227088  702522 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/home/jenkins/minikube-integration/17907-633350/.minikube/certs/key.pem (1679 bytes)
	I0108 20:35:10.227133  702522 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/6387322.pem (1708 bytes)
	I0108 20:35:10.227160  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/6387322.pem -> /usr/share/ca-certificates/6387322.pem
	I0108 20:35:10.227172  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:35:10.227182  702522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/638732.pem -> /usr/share/ca-certificates/638732.pem
	I0108 20:35:10.227518  702522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 20:35:10.256438  702522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 20:35:10.285532  702522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 20:35:10.314537  702522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 20:35:10.343519  702522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/6387322.pem --> /usr/share/ca-certificates/6387322.pem (1708 bytes)
	I0108 20:35:10.372745  702522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 20:35:10.400965  702522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/certs/638732.pem --> /usr/share/ca-certificates/638732.pem (1338 bytes)
	I0108 20:35:10.431311  702522 ssh_runner.go:195] Run: openssl version
	I0108 20:35:10.437985  702522 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0108 20:35:10.438571  702522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/638732.pem && ln -fs /usr/share/ca-certificates/638732.pem /etc/ssl/certs/638732.pem"
	I0108 20:35:10.450677  702522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/638732.pem
	I0108 20:35:10.455445  702522 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 20:18 /usr/share/ca-certificates/638732.pem
	I0108 20:35:10.455509  702522 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:18 /usr/share/ca-certificates/638732.pem
	I0108 20:35:10.455567  702522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/638732.pem
	I0108 20:35:10.463960  702522 command_runner.go:130] > 51391683
	I0108 20:35:10.464422  702522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/638732.pem /etc/ssl/certs/51391683.0"
	I0108 20:35:10.476240  702522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6387322.pem && ln -fs /usr/share/ca-certificates/6387322.pem /etc/ssl/certs/6387322.pem"
	I0108 20:35:10.488873  702522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6387322.pem
	I0108 20:35:10.494553  702522 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 20:18 /usr/share/ca-certificates/6387322.pem
	I0108 20:35:10.494828  702522 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:18 /usr/share/ca-certificates/6387322.pem
	I0108 20:35:10.494912  702522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6387322.pem
	I0108 20:35:10.504525  702522 command_runner.go:130] > 3ec20f2e
	I0108 20:35:10.505330  702522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6387322.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 20:35:10.517398  702522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 20:35:10.529875  702522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:35:10.539018  702522 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:35:10.539387  702522 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:35:10.539475  702522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:35:10.548434  702522 command_runner.go:130] > b5213941
	I0108 20:35:10.548887  702522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
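Each CA above is installed by hashing it with `openssl x509 -hash` and symlinking /etc/ssl/certs/<hash>.0 at it, which is the lookup scheme OpenSSL uses for trust directories; that is what the `test -L || ln -fs` commands do. A sketch of one such install (paths from the log; not minikube's source):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert links /etc/ssl/certs/<subject-hash>.0 at the given PEM so
// OpenSSL's trust-directory lookup finds it. Hypothetical helper.
func installCACert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace a stale link, mirroring ln -fs
	return os.Symlink(pem, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("error:", err)
	}
}
```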
	I0108 20:35:10.560731  702522 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 20:35:10.565032  702522 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 20:35:10.565327  702522 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 20:35:10.565430  702522 ssh_runner.go:195] Run: crio config
	I0108 20:35:10.618194  702522 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0108 20:35:10.618280  702522 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0108 20:35:10.618317  702522 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0108 20:35:10.618342  702522 command_runner.go:130] > #
	I0108 20:35:10.618367  702522 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0108 20:35:10.618403  702522 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0108 20:35:10.618429  702522 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0108 20:35:10.618507  702522 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0108 20:35:10.618530  702522 command_runner.go:130] > # reload'.
	I0108 20:35:10.618552  702522 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0108 20:35:10.618590  702522 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0108 20:35:10.618614  702522 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0108 20:35:10.618653  702522 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0108 20:35:10.618693  702522 command_runner.go:130] > [crio]
	I0108 20:35:10.618726  702522 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0108 20:35:10.618750  702522 command_runner.go:130] > # containers images, in this directory.
	I0108 20:35:10.618774  702522 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0108 20:35:10.618811  702522 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0108 20:35:10.618843  702522 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0108 20:35:10.618866  702522 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0108 20:35:10.618901  702522 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0108 20:35:10.618922  702522 command_runner.go:130] > # storage_driver = "vfs"
	I0108 20:35:10.618944  702522 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0108 20:35:10.618975  702522 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0108 20:35:10.618997  702522 command_runner.go:130] > # storage_option = [
	I0108 20:35:10.619016  702522 command_runner.go:130] > # ]
	I0108 20:35:10.619051  702522 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0108 20:35:10.619076  702522 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0108 20:35:10.619094  702522 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0108 20:35:10.619128  702522 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0108 20:35:10.619153  702522 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0108 20:35:10.619172  702522 command_runner.go:130] > # always happen on a node reboot
	I0108 20:35:10.619205  702522 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0108 20:35:10.619230  702522 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0108 20:35:10.619249  702522 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0108 20:35:10.619287  702522 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0108 20:35:10.619311  702522 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0108 20:35:10.619333  702522 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0108 20:35:10.619369  702522 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0108 20:35:10.619392  702522 command_runner.go:130] > # internal_wipe = true
	I0108 20:35:10.619413  702522 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0108 20:35:10.619448  702522 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0108 20:35:10.619473  702522 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0108 20:35:10.619494  702522 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0108 20:35:10.619529  702522 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0108 20:35:10.619552  702522 command_runner.go:130] > [crio.api]
	I0108 20:35:10.619573  702522 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0108 20:35:10.619611  702522 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0108 20:35:10.619640  702522 command_runner.go:130] > # IP address on which the stream server will listen.
	I0108 20:35:10.619660  702522 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0108 20:35:10.619694  702522 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0108 20:35:10.619727  702522 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0108 20:35:10.619747  702522 command_runner.go:130] > # stream_port = "0"
	I0108 20:35:10.619779  702522 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0108 20:35:10.619806  702522 command_runner.go:130] > # stream_enable_tls = false
	I0108 20:35:10.619828  702522 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0108 20:35:10.619861  702522 command_runner.go:130] > # stream_idle_timeout = ""
	I0108 20:35:10.619887  702522 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0108 20:35:10.619908  702522 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0108 20:35:10.619973  702522 command_runner.go:130] > # minutes.
	I0108 20:35:10.619996  702522 command_runner.go:130] > # stream_tls_cert = ""
	I0108 20:35:10.620027  702522 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0108 20:35:10.620048  702522 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0108 20:35:10.620100  702522 command_runner.go:130] > # stream_tls_key = ""
	I0108 20:35:10.620242  702522 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0108 20:35:10.620287  702522 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0108 20:35:10.620329  702522 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0108 20:35:10.620352  702522 command_runner.go:130] > # stream_tls_ca = ""
	I0108 20:35:10.620375  702522 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 20:35:10.620408  702522 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0108 20:35:10.620435  702522 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 20:35:10.620455  702522 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0108 20:35:10.620501  702522 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0108 20:35:10.620528  702522 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0108 20:35:10.620548  702522 command_runner.go:130] > [crio.runtime]
	I0108 20:35:10.620582  702522 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0108 20:35:10.620602  702522 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0108 20:35:10.620621  702522 command_runner.go:130] > # "nofile=1024:2048"
	I0108 20:35:10.620658  702522 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0108 20:35:10.620682  702522 command_runner.go:130] > # default_ulimits = [
	I0108 20:35:10.620702  702522 command_runner.go:130] > # ]
	I0108 20:35:10.620738  702522 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0108 20:35:10.620762  702522 command_runner.go:130] > # no_pivot = false
	I0108 20:35:10.620784  702522 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0108 20:35:10.620819  702522 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0108 20:35:10.620843  702522 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0108 20:35:10.620866  702522 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0108 20:35:10.620900  702522 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0108 20:35:10.620927  702522 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 20:35:10.620949  702522 command_runner.go:130] > # conmon = ""
	I0108 20:35:10.620983  702522 command_runner.go:130] > # Cgroup setting for conmon
	I0108 20:35:10.621010  702522 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0108 20:35:10.621030  702522 command_runner.go:130] > conmon_cgroup = "pod"
	I0108 20:35:10.621083  702522 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0108 20:35:10.621108  702522 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0108 20:35:10.621131  702522 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 20:35:10.621162  702522 command_runner.go:130] > # conmon_env = [
	I0108 20:35:10.621190  702522 command_runner.go:130] > # ]
	I0108 20:35:10.621223  702522 command_runner.go:130] > # Additional environment variables to set for all the
	I0108 20:35:10.621259  702522 command_runner.go:130] > # containers. These are overridden if set in the
	I0108 20:35:10.621280  702522 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0108 20:35:10.621299  702522 command_runner.go:130] > # default_env = [
	I0108 20:35:10.621329  702522 command_runner.go:130] > # ]
	I0108 20:35:10.621354  702522 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0108 20:35:10.621374  702522 command_runner.go:130] > # selinux = false
	I0108 20:35:10.621409  702522 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0108 20:35:10.621435  702522 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0108 20:35:10.621454  702522 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0108 20:35:10.621485  702522 command_runner.go:130] > # seccomp_profile = ""
	I0108 20:35:10.621509  702522 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0108 20:35:10.621529  702522 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0108 20:35:10.621563  702522 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0108 20:35:10.621585  702522 command_runner.go:130] > # which might increase security.
	I0108 20:35:10.621603  702522 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0108 20:35:10.621639  702522 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0108 20:35:10.621664  702522 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0108 20:35:10.621686  702522 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0108 20:35:10.621723  702522 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0108 20:35:10.621749  702522 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:35:10.621768  702522 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0108 20:35:10.621801  702522 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0108 20:35:10.621825  702522 command_runner.go:130] > # the cgroup blockio controller.
	I0108 20:35:10.621845  702522 command_runner.go:130] > # blockio_config_file = ""
	I0108 20:35:10.621881  702522 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0108 20:35:10.621907  702522 command_runner.go:130] > # irqbalance daemon.
	I0108 20:35:10.621926  702522 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0108 20:35:10.621961  702522 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0108 20:35:10.621986  702522 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:35:10.622006  702522 command_runner.go:130] > # rdt_config_file = ""
	I0108 20:35:10.622041  702522 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0108 20:35:10.622064  702522 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0108 20:35:10.622087  702522 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0108 20:35:10.622119  702522 command_runner.go:130] > # separate_pull_cgroup = ""
	I0108 20:35:10.622146  702522 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0108 20:35:10.622168  702522 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0108 20:35:10.622201  702522 command_runner.go:130] > # will be added.
	I0108 20:35:10.622224  702522 command_runner.go:130] > # default_capabilities = [
	I0108 20:35:10.622243  702522 command_runner.go:130] > # 	"CHOWN",
	I0108 20:35:10.622278  702522 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0108 20:35:10.622299  702522 command_runner.go:130] > # 	"FSETID",
	I0108 20:35:10.622317  702522 command_runner.go:130] > # 	"FOWNER",
	I0108 20:35:10.622336  702522 command_runner.go:130] > # 	"SETGID",
	I0108 20:35:10.622364  702522 command_runner.go:130] > # 	"SETUID",
	I0108 20:35:10.622386  702522 command_runner.go:130] > # 	"SETPCAP",
	I0108 20:35:10.622407  702522 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0108 20:35:10.622425  702522 command_runner.go:130] > # 	"KILL",
	I0108 20:35:10.622475  702522 command_runner.go:130] > # ]
	I0108 20:35:10.622510  702522 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0108 20:35:10.622531  702522 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0108 20:35:10.622566  702522 command_runner.go:130] > # add_inheritable_capabilities = true
	I0108 20:35:10.622590  702522 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0108 20:35:10.622612  702522 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 20:35:10.622645  702522 command_runner.go:130] > # default_sysctls = [
	I0108 20:35:10.622658  702522 command_runner.go:130] > # ]
	I0108 20:35:10.622665  702522 command_runner.go:130] > # List of devices on the host that a
	I0108 20:35:10.622672  702522 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0108 20:35:10.622677  702522 command_runner.go:130] > # allowed_devices = [
	I0108 20:35:10.622682  702522 command_runner.go:130] > # 	"/dev/fuse",
	I0108 20:35:10.622686  702522 command_runner.go:130] > # ]
	I0108 20:35:10.622701  702522 command_runner.go:130] > # List of additional devices, specified as
	I0108 20:35:10.622727  702522 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0108 20:35:10.622739  702522 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0108 20:35:10.622746  702522 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 20:35:10.622754  702522 command_runner.go:130] > # additional_devices = [
	I0108 20:35:10.622759  702522 command_runner.go:130] > # ]
	I0108 20:35:10.622776  702522 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0108 20:35:10.622782  702522 command_runner.go:130] > # cdi_spec_dirs = [
	I0108 20:35:10.622787  702522 command_runner.go:130] > # 	"/etc/cdi",
	I0108 20:35:10.622802  702522 command_runner.go:130] > # 	"/var/run/cdi",
	I0108 20:35:10.622814  702522 command_runner.go:130] > # ]
	I0108 20:35:10.622822  702522 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0108 20:35:10.622830  702522 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0108 20:35:10.622839  702522 command_runner.go:130] > # Defaults to false.
	I0108 20:35:10.622846  702522 command_runner.go:130] > # device_ownership_from_security_context = false
	I0108 20:35:10.622866  702522 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0108 20:35:10.622882  702522 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0108 20:35:10.622887  702522 command_runner.go:130] > # hooks_dir = [
	I0108 20:35:10.622908  702522 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0108 20:35:10.622913  702522 command_runner.go:130] > # ]
	I0108 20:35:10.622921  702522 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0108 20:35:10.622933  702522 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0108 20:35:10.622939  702522 command_runner.go:130] > # its default mounts from the following two files:
	I0108 20:35:10.622944  702522 command_runner.go:130] > #
	I0108 20:35:10.622954  702522 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0108 20:35:10.622973  702522 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0108 20:35:10.622988  702522 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0108 20:35:10.622993  702522 command_runner.go:130] > #
	I0108 20:35:10.623001  702522 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0108 20:35:10.623014  702522 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0108 20:35:10.623022  702522 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0108 20:35:10.623031  702522 command_runner.go:130] > #      only add mounts it finds in this file.
	I0108 20:35:10.623036  702522 command_runner.go:130] > #
	I0108 20:35:10.623047  702522 command_runner.go:130] > # default_mounts_file = ""
	I0108 20:35:10.623055  702522 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0108 20:35:10.623063  702522 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0108 20:35:10.623069  702522 command_runner.go:130] > # pids_limit = 0
	I0108 20:35:10.623082  702522 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0108 20:35:10.623093  702522 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0108 20:35:10.623101  702522 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0108 20:35:10.623113  702522 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0108 20:35:10.623118  702522 command_runner.go:130] > # log_size_max = -1
	I0108 20:35:10.623127  702522 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0108 20:35:10.623136  702522 command_runner.go:130] > # log_to_journald = false
	I0108 20:35:10.623143  702522 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0108 20:35:10.623150  702522 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0108 20:35:10.623156  702522 command_runner.go:130] > # Path to directory for container attach sockets.
	I0108 20:35:10.623163  702522 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0108 20:35:10.623172  702522 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0108 20:35:10.623185  702522 command_runner.go:130] > # bind_mount_prefix = ""
	I0108 20:35:10.623192  702522 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0108 20:35:10.623197  702522 command_runner.go:130] > # read_only = false
	I0108 20:35:10.623207  702522 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0108 20:35:10.623215  702522 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0108 20:35:10.623222  702522 command_runner.go:130] > # live configuration reload.
	I0108 20:35:10.623227  702522 command_runner.go:130] > # log_level = "info"
	I0108 20:35:10.623234  702522 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0108 20:35:10.623241  702522 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:35:10.623248  702522 command_runner.go:130] > # log_filter = ""
	I0108 20:35:10.623256  702522 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0108 20:35:10.623266  702522 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0108 20:35:10.623271  702522 command_runner.go:130] > # separated by comma.
	I0108 20:35:10.623276  702522 command_runner.go:130] > # uid_mappings = ""
	I0108 20:35:10.623290  702522 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0108 20:35:10.623298  702522 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0108 20:35:10.623303  702522 command_runner.go:130] > # separated by comma.
	I0108 20:35:10.623308  702522 command_runner.go:130] > # gid_mappings = ""
	I0108 20:35:10.623318  702522 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0108 20:35:10.623328  702522 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 20:35:10.623337  702522 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 20:35:10.623343  702522 command_runner.go:130] > # minimum_mappable_uid = -1
	I0108 20:35:10.623352  702522 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0108 20:35:10.623360  702522 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 20:35:10.623371  702522 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 20:35:10.623377  702522 command_runner.go:130] > # minimum_mappable_gid = -1
	I0108 20:35:10.623384  702522 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0108 20:35:10.623394  702522 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0108 20:35:10.623401  702522 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0108 20:35:10.623406  702522 command_runner.go:130] > # ctr_stop_timeout = 30
	I0108 20:35:10.623413  702522 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0108 20:35:10.623422  702522 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0108 20:35:10.623432  702522 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0108 20:35:10.623438  702522 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0108 20:35:10.623443  702522 command_runner.go:130] > # drop_infra_ctr = true
	I0108 20:35:10.623451  702522 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0108 20:35:10.623462  702522 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0108 20:35:10.623471  702522 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0108 20:35:10.623479  702522 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0108 20:35:10.623486  702522 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0108 20:35:10.623492  702522 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0108 20:35:10.623498  702522 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0108 20:35:10.623506  702522 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0108 20:35:10.623514  702522 command_runner.go:130] > # pinns_path = ""
	I0108 20:35:10.623522  702522 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0108 20:35:10.623529  702522 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0108 20:35:10.623540  702522 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0108 20:35:10.623545  702522 command_runner.go:130] > # default_runtime = "runc"
	I0108 20:35:10.623552  702522 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0108 20:35:10.623564  702522 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0108 20:35:10.623576  702522 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0108 20:35:10.623582  702522 command_runner.go:130] > # creation as a file is not desired either.
	I0108 20:35:10.623597  702522 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0108 20:35:10.623607  702522 command_runner.go:130] > # the hostname is being managed dynamically.
	I0108 20:35:10.623632  702522 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0108 20:35:10.623636  702522 command_runner.go:130] > # ]
	I0108 20:35:10.623647  702522 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0108 20:35:10.623654  702522 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0108 20:35:10.623662  702522 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0108 20:35:10.623670  702522 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0108 20:35:10.623677  702522 command_runner.go:130] > #
	I0108 20:35:10.623685  702522 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0108 20:35:10.623692  702522 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0108 20:35:10.623699  702522 command_runner.go:130] > #  runtime_type = "oci"
	I0108 20:35:10.623705  702522 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0108 20:35:10.623711  702522 command_runner.go:130] > #  privileged_without_host_devices = false
	I0108 20:35:10.623716  702522 command_runner.go:130] > #  allowed_annotations = []
	I0108 20:35:10.623730  702522 command_runner.go:130] > # Where:
	I0108 20:35:10.623737  702522 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0108 20:35:10.623745  702522 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0108 20:35:10.623752  702522 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0108 20:35:10.623760  702522 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0108 20:35:10.623767  702522 command_runner.go:130] > #   in $PATH.
	I0108 20:35:10.623775  702522 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0108 20:35:10.623781  702522 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0108 20:35:10.623788  702522 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0108 20:35:10.623796  702522 command_runner.go:130] > #   state.
	I0108 20:35:10.623804  702522 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0108 20:35:10.623831  702522 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0108 20:35:10.623842  702522 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0108 20:35:10.623849  702522 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0108 20:35:10.623857  702522 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0108 20:35:10.623865  702522 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0108 20:35:10.623871  702522 command_runner.go:130] > #   The currently recognized values are:
	I0108 20:35:10.623879  702522 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0108 20:35:10.623888  702522 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0108 20:35:10.623895  702522 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0108 20:35:10.623902  702522 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0108 20:35:10.623912  702522 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0108 20:35:10.623923  702522 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0108 20:35:10.623930  702522 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0108 20:35:10.623938  702522 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0108 20:35:10.623945  702522 command_runner.go:130] > #   should be moved to the container's cgroup
	I0108 20:35:10.623955  702522 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0108 20:35:10.623961  702522 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0108 20:35:10.623968  702522 command_runner.go:130] > runtime_type = "oci"
	I0108 20:35:10.623977  702522 command_runner.go:130] > runtime_root = "/run/runc"
	I0108 20:35:10.623982  702522 command_runner.go:130] > runtime_config_path = ""
	I0108 20:35:10.623990  702522 command_runner.go:130] > monitor_path = ""
	I0108 20:35:10.623995  702522 command_runner.go:130] > monitor_cgroup = ""
	I0108 20:35:10.624000  702522 command_runner.go:130] > monitor_exec_cgroup = ""
	I0108 20:35:10.624022  702522 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0108 20:35:10.624031  702522 command_runner.go:130] > # running containers
	I0108 20:35:10.624037  702522 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0108 20:35:10.624044  702522 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0108 20:35:10.624053  702522 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0108 20:35:10.624063  702522 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0108 20:35:10.624070  702522 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0108 20:35:10.624076  702522 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0108 20:35:10.624084  702522 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0108 20:35:10.624090  702522 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0108 20:35:10.624096  702522 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0108 20:35:10.624104  702522 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0108 20:35:10.624113  702522 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0108 20:35:10.624122  702522 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0108 20:35:10.624130  702522 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0108 20:35:10.624139  702522 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0108 20:35:10.624151  702522 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0108 20:35:10.624159  702522 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0108 20:35:10.624173  702522 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0108 20:35:10.624183  702522 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0108 20:35:10.624194  702522 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0108 20:35:10.624211  702522 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0108 20:35:10.624218  702522 command_runner.go:130] > # Example:
	I0108 20:35:10.624224  702522 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0108 20:35:10.624230  702522 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0108 20:35:10.624239  702522 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0108 20:35:10.624246  702522 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0108 20:35:10.624251  702522 command_runner.go:130] > # cpuset = 0
	I0108 20:35:10.624258  702522 command_runner.go:130] > # cpushares = "0-1"
	I0108 20:35:10.624262  702522 command_runner.go:130] > # Where:
	I0108 20:35:10.624268  702522 command_runner.go:130] > # The workload name is workload-type.
	I0108 20:35:10.624281  702522 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0108 20:35:10.624288  702522 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0108 20:35:10.624298  702522 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0108 20:35:10.624308  702522 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0108 20:35:10.624316  702522 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0108 20:35:10.624324  702522 command_runner.go:130] > # 
	I0108 20:35:10.624332  702522 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0108 20:35:10.624336  702522 command_runner.go:130] > #
	I0108 20:35:10.624346  702522 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0108 20:35:10.624353  702522 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0108 20:35:10.624363  702522 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0108 20:35:10.624371  702522 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0108 20:35:10.624381  702522 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0108 20:35:10.624385  702522 command_runner.go:130] > [crio.image]
	I0108 20:35:10.624392  702522 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0108 20:35:10.624398  702522 command_runner.go:130] > # default_transport = "docker://"
	I0108 20:35:10.624408  702522 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0108 20:35:10.624416  702522 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0108 20:35:10.624423  702522 command_runner.go:130] > # global_auth_file = ""
	I0108 20:35:10.624432  702522 command_runner.go:130] > # The image used to instantiate infra containers.
	I0108 20:35:10.624441  702522 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:35:10.624447  702522 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0108 20:35:10.624455  702522 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0108 20:35:10.624466  702522 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0108 20:35:10.624472  702522 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:35:10.624477  702522 command_runner.go:130] > # pause_image_auth_file = ""
	I0108 20:35:10.624490  702522 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0108 20:35:10.624498  702522 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0108 20:35:10.624505  702522 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0108 20:35:10.624515  702522 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0108 20:35:10.624520  702522 command_runner.go:130] > # pause_command = "/pause"
	I0108 20:35:10.624528  702522 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0108 20:35:10.624538  702522 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0108 20:35:10.624546  702522 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0108 20:35:10.624557  702522 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0108 20:35:10.624563  702522 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0108 20:35:10.624573  702522 command_runner.go:130] > # signature_policy = ""
	I0108 20:35:10.624581  702522 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0108 20:35:10.624589  702522 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0108 20:35:10.624597  702522 command_runner.go:130] > # changing them here.
	I0108 20:35:10.624602  702522 command_runner.go:130] > # insecure_registries = [
	I0108 20:35:10.624607  702522 command_runner.go:130] > # ]
	I0108 20:35:10.624617  702522 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0108 20:35:10.624624  702522 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I0108 20:35:10.624632  702522 command_runner.go:130] > # image_volumes = "mkdir"
	I0108 20:35:10.624638  702522 command_runner.go:130] > # Temporary directory to use for storing big files
	I0108 20:35:10.624644  702522 command_runner.go:130] > # big_files_temporary_dir = ""
	I0108 20:35:10.624655  702522 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0108 20:35:10.624660  702522 command_runner.go:130] > # CNI plugins.
	I0108 20:35:10.624664  702522 command_runner.go:130] > [crio.network]
	I0108 20:35:10.624674  702522 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0108 20:35:10.624686  702522 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0108 20:35:10.624691  702522 command_runner.go:130] > # cni_default_network = ""
	I0108 20:35:10.624698  702522 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0108 20:35:10.624707  702522 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0108 20:35:10.624715  702522 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0108 20:35:10.624724  702522 command_runner.go:130] > # plugin_dirs = [
	I0108 20:35:10.624729  702522 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0108 20:35:10.624733  702522 command_runner.go:130] > # ]
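
The scan described above is simple: CRI-O reads network_dir and, with cni_default_network unset, takes the first network it finds. A short sketch that lists the same candidates (assuming the default /etc/cni/net.d path):

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"path/filepath"
    )

    func main() {
    	const networkDir = "/etc/cni/net.d" // the default network_dir above
    	entries, err := os.ReadDir(networkDir)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// With cni_default_network unset, CRI-O takes the first network found here.
    	for _, e := range entries {
    		fmt.Println(filepath.Join(networkDir, e.Name()))
    	}
    }
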
	I0108 20:35:10.624745  702522 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0108 20:35:10.624750  702522 command_runner.go:130] > [crio.metrics]
	I0108 20:35:10.624756  702522 command_runner.go:130] > # Globally enable or disable metrics support.
	I0108 20:35:10.624761  702522 command_runner.go:130] > # enable_metrics = false
	I0108 20:35:10.624770  702522 command_runner.go:130] > # Specify enabled metrics collectors.
	I0108 20:35:10.624780  702522 command_runner.go:130] > # By default, all metrics are enabled.
	I0108 20:35:10.624788  702522 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0108 20:35:10.624801  702522 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0108 20:35:10.624809  702522 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0108 20:35:10.624818  702522 command_runner.go:130] > # metrics_collectors = [
	I0108 20:35:10.624823  702522 command_runner.go:130] > # 	"operations",
	I0108 20:35:10.624830  702522 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0108 20:35:10.624839  702522 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0108 20:35:10.624845  702522 command_runner.go:130] > # 	"operations_errors",
	I0108 20:35:10.624851  702522 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0108 20:35:10.624866  702522 command_runner.go:130] > # 	"image_pulls_by_name",
	I0108 20:35:10.624871  702522 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0108 20:35:10.624876  702522 command_runner.go:130] > # 	"image_pulls_failures",
	I0108 20:35:10.624882  702522 command_runner.go:130] > # 	"image_pulls_successes",
	I0108 20:35:10.624889  702522 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0108 20:35:10.624895  702522 command_runner.go:130] > # 	"image_layer_reuse",
	I0108 20:35:10.624902  702522 command_runner.go:130] > # 	"containers_oom_total",
	I0108 20:35:10.624907  702522 command_runner.go:130] > # 	"containers_oom",
	I0108 20:35:10.624912  702522 command_runner.go:130] > # 	"processes_defunct",
	I0108 20:35:10.624919  702522 command_runner.go:130] > # 	"operations_total",
	I0108 20:35:10.624925  702522 command_runner.go:130] > # 	"operations_latency_seconds",
	I0108 20:35:10.624933  702522 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0108 20:35:10.624939  702522 command_runner.go:130] > # 	"operations_errors_total",
	I0108 20:35:10.624945  702522 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0108 20:35:10.624953  702522 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0108 20:35:10.624959  702522 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0108 20:35:10.624964  702522 command_runner.go:130] > # 	"image_pulls_success_total",
	I0108 20:35:10.624970  702522 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0108 20:35:10.624978  702522 command_runner.go:130] > # 	"containers_oom_count_total",
	I0108 20:35:10.624987  702522 command_runner.go:130] > # ]
	I0108 20:35:10.624996  702522 command_runner.go:130] > # The port on which the metrics server will listen.
	I0108 20:35:10.625006  702522 command_runner.go:130] > # metrics_port = 9090
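
When enable_metrics is switched on, the collectors above are served as a Prometheus endpoint on metrics_port. A minimal scrape sketch (assuming the default port 9090 and a listener on localhost):

    package main

    import (
    	"io"
    	"log"
    	"net/http"
    	"os"
    )

    func main() {
    	// metrics_port defaults to 9090 per the config above; this assumes
    	// enable_metrics = true and a metrics listener on localhost.
    	resp, err := http.Get("http://127.0.0.1:9090/metrics")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer resp.Body.Close()
    	if _, err := io.Copy(os.Stdout, resp.Body); err != nil {
    		log.Fatal(err)
    	}
    }
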
	I0108 20:35:10.625013  702522 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0108 20:35:10.625023  702522 command_runner.go:130] > # metrics_socket = ""
	I0108 20:35:10.625030  702522 command_runner.go:130] > # The certificate for the secure metrics server.
	I0108 20:35:10.625037  702522 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0108 20:35:10.625048  702522 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0108 20:35:10.625054  702522 command_runner.go:130] > # certificate on any modification event.
	I0108 20:35:10.625058  702522 command_runner.go:130] > # metrics_cert = ""
	I0108 20:35:10.625072  702522 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0108 20:35:10.625081  702522 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0108 20:35:10.625086  702522 command_runner.go:130] > # metrics_key = ""
	I0108 20:35:10.625093  702522 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0108 20:35:10.625101  702522 command_runner.go:130] > [crio.tracing]
	I0108 20:35:10.625109  702522 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0108 20:35:10.625114  702522 command_runner.go:130] > # enable_tracing = false
	I0108 20:35:10.625123  702522 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0108 20:35:10.625129  702522 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0108 20:35:10.625136  702522 command_runner.go:130] > # Number of samples to collect per million spans.
	I0108 20:35:10.625145  702522 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0108 20:35:10.625153  702522 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0108 20:35:10.625162  702522 command_runner.go:130] > [crio.stats]
	I0108 20:35:10.625173  702522 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0108 20:35:10.625179  702522 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0108 20:35:10.625187  702522 command_runner.go:130] > # stats_collection_period = 0
	I0108 20:35:10.625227  702522 command_runner.go:130] ! time="2024-01-08 20:35:10.613902847Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0108 20:35:10.625244  702522 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
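
Everything above is the effective CRI-O configuration echoed back by the node. A small sketch of reading one key out of it again, assuming the file sits at /etc/crio/crio.conf and using the github.com/BurntSushi/toml parser:

    package main

    import (
    	"fmt"
    	"log"

    	"github.com/BurntSushi/toml"
    )

    // Just the keys this sketch cares about from the [crio.image] table.
    type crioConfig struct {
    	Crio struct {
    		Image struct {
    			PauseImage string `toml:"pause_image"`
    		} `toml:"image"`
    	} `toml:"crio"`
    }

    func main() {
    	var cfg crioConfig
    	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("pause image:", cfg.Crio.Image.PauseImage) // registry.k8s.io/pause:3.9 above
    }
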
	I0108 20:35:10.625307  702522 cni.go:84] Creating CNI manager for ""
	I0108 20:35:10.625320  702522 cni.go:136] 2 nodes found, recommending kindnet
	I0108 20:35:10.625329  702522 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 20:35:10.625349  702522 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-933566 NodeName:multinode-933566-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 20:35:10.625482  702522 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-933566-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
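
The YAML above is rendered from the kubeadm options struct logged at kubeadm.go:176. A toy sketch of that render step for just the InitConfiguration head, with illustrative field names:

    package main

    import (
    	"log"
    	"os"
    	"text/template"
    )

    // Illustrative field names; the real options struct is the one logged above.
    type kubeadmOpts struct {
    	AdvertiseAddress string
    	APIServerPort    int
    }

    const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    `

    func main() {
    	t := template.Must(template.New("init").Parse(initTmpl))
    	opts := kubeadmOpts{AdvertiseAddress: "192.168.58.3", APIServerPort: 8443}
    	if err := t.Execute(os.Stdout, opts); err != nil {
    		log.Fatal(err)
    	}
    }
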
	
	I0108 20:35:10.625542  702522 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-933566-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-933566 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 20:35:10.625612  702522 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 20:35:10.636577  702522 command_runner.go:130] > kubeadm
	I0108 20:35:10.636595  702522 command_runner.go:130] > kubectl
	I0108 20:35:10.636600  702522 command_runner.go:130] > kubelet
	I0108 20:35:10.636613  702522 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 20:35:10.636667  702522 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0108 20:35:10.647253  702522 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0108 20:35:10.669038  702522 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 20:35:10.691431  702522 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0108 20:35:10.696216  702522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
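
The one-liner above keeps the /etc/hosts edit idempotent: grep -v strips any stale control-plane.minikube.internal line, the fresh mapping is appended, and the temp file is copied back over /etc/hosts with sudo. A standalone Go equivalent of the same rewrite (sketch only; the real flow runs inside the node over SSH):

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const hostsPath = "/etc/hosts" // assumes running as root inside the node
    	const entry = "192.168.58.2\tcontrol-plane.minikube.internal"

    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		log.Fatal(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Same filter as grep -v $'\tcontrol-plane.minikube.internal$'.
    		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		log.Fatal(err)
    	}
    }
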
	I0108 20:35:10.709817  702522 host.go:66] Checking if "multinode-933566" exists ...
	I0108 20:35:10.710086  702522 start.go:304] JoinCluster: &{Name:multinode-933566 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-933566 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:35:10.710190  702522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0108 20:35:10.710241  702522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-933566
	I0108 20:35:10.710650  702522 config.go:182] Loaded profile config "multinode-933566": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:35:10.731883  702522 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33479 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/multinode-933566/id_rsa Username:docker}
	I0108 20:35:10.908027  702522 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token q4ak2j.h2h2vu3ewed45bjn --discovery-token-ca-cert-hash sha256:7781d8275fe6fc370b9207d46f90d60f186320d9f0d72d24606e41c221afb39a 
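
The join command above is the output of kubeadm token create --print-join-command with --ttl=0, i.e. a non-expiring bootstrap token. Run locally rather than over SSH, the equivalent is roughly (assuming kubeadm on PATH and admin rights on the control plane):

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	// --ttl=0 makes the bootstrap token non-expiring, as in the log above.
    	out, err := exec.Command("sudo", "kubeadm", "token", "create",
    		"--print-join-command", "--ttl=0").CombinedOutput()
    	if err != nil {
    		log.Fatalf("%v: %s", err, out)
    	}
    	fmt.Printf("join command: %s", out)
    }
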
	I0108 20:35:10.908077  702522 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 20:35:10.908121  702522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token q4ak2j.h2h2vu3ewed45bjn --discovery-token-ca-cert-hash sha256:7781d8275fe6fc370b9207d46f90d60f186320d9f0d72d24606e41c221afb39a --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-933566-m02"
	I0108 20:35:10.955039  702522 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 20:35:10.997758  702522 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0108 20:35:10.997786  702522 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1051-aws
	I0108 20:35:10.997793  702522 command_runner.go:130] > OS: Linux
	I0108 20:35:10.997799  702522 command_runner.go:130] > CGROUPS_CPU: enabled
	I0108 20:35:10.997807  702522 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0108 20:35:10.997813  702522 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0108 20:35:10.997820  702522 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0108 20:35:10.997830  702522 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0108 20:35:10.997838  702522 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0108 20:35:10.997854  702522 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0108 20:35:10.997860  702522 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0108 20:35:10.997869  702522 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0108 20:35:11.107197  702522 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 20:35:11.107225  702522 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0108 20:35:11.137444  702522 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 20:35:11.137604  702522 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 20:35:11.137619  702522 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 20:35:11.234417  702522 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0108 20:35:13.752432  702522 command_runner.go:130] > This node has joined the cluster:
	I0108 20:35:13.752457  702522 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0108 20:35:13.752466  702522 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0108 20:35:13.752474  702522 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0108 20:35:13.755831  702522 command_runner.go:130] ! W0108 20:35:10.954582    1026 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0108 20:35:13.755859  702522 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0108 20:35:13.755872  702522 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 20:35:13.755886  702522 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token q4ak2j.h2h2vu3ewed45bjn --discovery-token-ca-cert-hash sha256:7781d8275fe6fc370b9207d46f90d60f186320d9f0d72d24606e41c221afb39a --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-933566-m02": (2.847747628s)
	I0108 20:35:13.755904  702522 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0108 20:35:13.970803  702522 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0108 20:35:13.970891  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28 minikube.k8s.io/name=multinode-933566 minikube.k8s.io/updated_at=2024_01_08T20_35_13_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:35:14.077946  702522 command_runner.go:130] > node/multinode-933566-m02 labeled
	I0108 20:35:14.081653  702522 start.go:306] JoinCluster complete in 3.371560054s
	I0108 20:35:14.081681  702522 cni.go:84] Creating CNI manager for ""
	I0108 20:35:14.081687  702522 cni.go:136] 2 nodes found, recommending kindnet
	I0108 20:35:14.081742  702522 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 20:35:14.086769  702522 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 20:35:14.086794  702522 command_runner.go:130] >   Size: 4030506   	Blocks: 7880       IO Block: 4096   regular file
	I0108 20:35:14.086802  702522 command_runner.go:130] > Device: 3ah/58d	Inode: 1572315     Links: 1
	I0108 20:35:14.086810  702522 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 20:35:14.086818  702522 command_runner.go:130] > Access: 2023-12-04 16:39:54.000000000 +0000
	I0108 20:35:14.086826  702522 command_runner.go:130] > Modify: 2023-12-04 16:39:54.000000000 +0000
	I0108 20:35:14.086836  702522 command_runner.go:130] > Change: 2024-01-08 20:10:27.984657575 +0000
	I0108 20:35:14.086843  702522 command_runner.go:130] >  Birth: 2024-01-08 20:10:27.940657342 +0000
	I0108 20:35:14.086887  702522 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 20:35:14.086899  702522 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 20:35:14.108373  702522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 20:35:14.422602  702522 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0108 20:35:14.428624  702522 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0108 20:35:14.431705  702522 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0108 20:35:14.445707  702522 command_runner.go:130] > daemonset.apps/kindnet configured
	I0108 20:35:14.451556  702522 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17907-633350/kubeconfig
	I0108 20:35:14.451837  702522 kapi.go:59] client config for multinode-933566: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/client.key", CAFile:"/home/jenkins/minikube-integration/17907-633350/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:35:14.452157  702522 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 20:35:14.452174  702522 round_trippers.go:469] Request Headers:
	I0108 20:35:14.452184  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:35:14.452193  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:35:14.460807  702522 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0108 20:35:14.460832  702522 round_trippers.go:577] Response Headers:
	I0108 20:35:14.460841  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:35:14.460848  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:35:14.460855  702522 round_trippers.go:580]     Content-Length: 291
	I0108 20:35:14.460861  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:35:14 GMT
	I0108 20:35:14.460874  702522 round_trippers.go:580]     Audit-Id: eec2f1aa-867e-4e3c-aa16-8fe0cd4a8ae6
	I0108 20:35:14.460880  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:35:14.460891  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:35:14.460913  702522 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f3819ad3-831c-4511-bd60-afe254a308f4","resourceVersion":"460","creationTimestamp":"2024-01-08T20:34:13Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 20:35:14.461002  702522 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-933566" context rescaled to 1 replicas
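
The rescale above goes through the deployment's scale subresource. A client-go sketch of the same read-then-update pair, using the kubeconfig path the log loads; the update only fires when the replica count actually differs:

    package main

    import (
    	"context"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17907-633350/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	ctx := context.Background()
    	deploys := cs.AppsV1().Deployments("kube-system")
    	// Read the scale subresource, as the GET on .../deployments/coredns/scale does.
    	scale, err := deploys.GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	if scale.Spec.Replicas != 1 {
    		scale.Spec.Replicas = 1
    		if _, err := deploys.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
    			log.Fatal(err)
    		}
    	}
    }
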
	I0108 20:35:14.461032  702522 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 20:35:14.465002  702522 out.go:177] * Verifying Kubernetes components...
	I0108 20:35:14.467043  702522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:35:14.504735  702522 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17907-633350/kubeconfig
	I0108 20:35:14.505015  702522 kapi.go:59] client config for multinode-933566: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-633350/.minikube/profiles/multinode-933566/client.key", CAFile:"/home/jenkins/minikube-integration/17907-633350/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:35:14.505286  702522 node_ready.go:35] waiting up to 6m0s for node "multinode-933566-m02" to be "Ready" ...
	I0108 20:35:14.505356  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566-m02
	I0108 20:35:14.505362  702522 round_trippers.go:469] Request Headers:
	I0108 20:35:14.505371  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:35:14.505377  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:35:14.511245  702522 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 20:35:14.511318  702522 round_trippers.go:577] Response Headers:
	I0108 20:35:14.511341  702522 round_trippers.go:580]     Audit-Id: e346e3c0-58b7-4c85-ae7d-dece0fcc02db
	I0108 20:35:14.511362  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:35:14.511386  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:35:14.511414  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:35:14.511434  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:35:14.511459  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:35:14 GMT
	I0108 20:35:14.513456  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566-m02","uid":"a54f9fe1-398b-48b5-b482-b6628a79b549","resourceVersion":"496","creationTimestamp":"2024-01-08T20:35:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_35_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:35:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I0108 20:35:15.006191  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566-m02
	I0108 20:35:15.006220  702522 round_trippers.go:469] Request Headers:
	I0108 20:35:15.006233  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:35:15.006242  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:35:15.008998  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:35:15.009023  702522 round_trippers.go:577] Response Headers:
	I0108 20:35:15.009035  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:35:15.009042  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:35:15.009048  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:35:15.009055  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:35:15.009062  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:35:15 GMT
	I0108 20:35:15.009068  702522 round_trippers.go:580]     Audit-Id: 45c2af66-9758-4a6e-ae00-e98285b3582c
	I0108 20:35:15.009285  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566-m02","uid":"a54f9fe1-398b-48b5-b482-b6628a79b549","resourceVersion":"496","creationTimestamp":"2024-01-08T20:35:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_35_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:35:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I0108 20:35:15.505525  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566-m02
	I0108 20:35:15.505550  702522 round_trippers.go:469] Request Headers:
	I0108 20:35:15.505560  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:35:15.505567  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:35:15.507987  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:35:15.508042  702522 round_trippers.go:577] Response Headers:
	I0108 20:35:15.508057  702522 round_trippers.go:580]     Audit-Id: 9d8d7aa4-3c8d-4e89-b652-c173b4b0416d
	I0108 20:35:15.508065  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:35:15.508071  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:35:15.508077  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:35:15.508084  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:35:15.508094  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:35:15 GMT
	I0108 20:35:15.508503  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566-m02","uid":"a54f9fe1-398b-48b5-b482-b6628a79b549","resourceVersion":"496","creationTimestamp":"2024-01-08T20:35:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_35_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:35:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I0108 20:35:16.006273  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566-m02
	I0108 20:35:16.006301  702522 round_trippers.go:469] Request Headers:
	I0108 20:35:16.006311  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:35:16.006319  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:35:16.008770  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:35:16.008806  702522 round_trippers.go:577] Response Headers:
	I0108 20:35:16.008816  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:35:16.008822  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:35:16.008829  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:35:16.008837  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:35:16 GMT
	I0108 20:35:16.008846  702522 round_trippers.go:580]     Audit-Id: 80930ea2-788e-4b3c-916b-abb6d898ecbf
	I0108 20:35:16.008852  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:35:16.009119  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566-m02","uid":"a54f9fe1-398b-48b5-b482-b6628a79b549","resourceVersion":"513","creationTimestamp":"2024-01-08T20:35:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_35_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:35:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5810 chars]
	I0108 20:35:16.009551  702522 node_ready.go:49] node "multinode-933566-m02" has status "Ready":"True"
	I0108 20:35:16.009570  702522 node_ready.go:38] duration metric: took 1.504267686s waiting for node "multinode-933566-m02" to be "Ready" ...
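
The readiness wait above is a plain poll: GET the node roughly every half second and inspect its Ready condition. A client-go sketch of that loop, again using the kubeconfig path from the log:

    package main

    import (
    	"context"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17907-633350/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
    		node, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-933566-m02", metav1.GetOptions{})
    		if err != nil {
    			return false, nil // treat errors as "not ready yet" and keep polling
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady {
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Println("node multinode-933566-m02 is Ready")
    }
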
	I0108 20:35:16.009594  702522 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:35:16.009673  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 20:35:16.009683  702522 round_trippers.go:469] Request Headers:
	I0108 20:35:16.009692  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:35:16.009699  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:35:16.013213  702522 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:35:16.013240  702522 round_trippers.go:577] Response Headers:
	I0108 20:35:16.013249  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:35:16.013256  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:35:16 GMT
	I0108 20:35:16.013262  702522 round_trippers.go:580]     Audit-Id: 1bf7ba8c-a17c-4be3-b43a-6998a7efc035
	I0108 20:35:16.013268  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:35:16.013275  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:35:16.013281  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:35:16.013859  702522 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"514"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2945x","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"dfb7da4b-0626-4cd3-accf-49736fec486b","resourceVersion":"455","creationTimestamp":"2024-01-08T20:34:26Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"72dec1d4-3d8b-4eb1-86df-aa1268e266be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72dec1d4-3d8b-4eb1-86df-aa1268e266be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I0108 20:35:16.016768  702522 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2945x" in "kube-system" namespace to be "Ready" ...
	I0108 20:35:16.016868  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2945x
	I0108 20:35:16.016879  702522 round_trippers.go:469] Request Headers:
	I0108 20:35:16.016888  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:35:16.016895  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:35:16.019289  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:35:16.019309  702522 round_trippers.go:577] Response Headers:
	I0108 20:35:16.019317  702522 round_trippers.go:580]     Audit-Id: 26051c3e-948b-42ca-b78f-3d7c1cc52826
	I0108 20:35:16.019324  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:35:16.019331  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:35:16.019341  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:35:16.019354  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:35:16.019360  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:35:16 GMT
	I0108 20:35:16.019517  702522 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2945x","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"dfb7da4b-0626-4cd3-accf-49736fec486b","resourceVersion":"455","creationTimestamp":"2024-01-08T20:34:26Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"72dec1d4-3d8b-4eb1-86df-aa1268e266be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72dec1d4-3d8b-4eb1-86df-aa1268e266be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0108 20:35:16.020008  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:35:16.020025  702522 round_trippers.go:469] Request Headers:
	I0108 20:35:16.020034  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:35:16.020041  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:35:16.022180  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:35:16.022230  702522 round_trippers.go:577] Response Headers:
	I0108 20:35:16.022252  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:35:16.022272  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:35:16.022307  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:35:16.022332  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:35:16 GMT
	I0108 20:35:16.022354  702522 round_trippers.go:580]     Audit-Id: 8540de52-639f-4113-bc9d-379e9aeb26de
	I0108 20:35:16.022390  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:35:16.022515  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"437","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 20:35:16.022923  702522 pod_ready.go:92] pod "coredns-5dd5756b68-2945x" in "kube-system" namespace has status "Ready":"True"
	I0108 20:35:16.022943  702522 pod_ready.go:81] duration metric: took 6.149069ms waiting for pod "coredns-5dd5756b68-2945x" in "kube-system" namespace to be "Ready" ...
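
Each per-pod wait that follows boils down to the same condition check. A sketch of the predicate, exercised here on a constructed pod:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // podReady mirrors the per-pod check the log performs for each
    // system pod (coredns, etcd, kube-apiserver, ...).
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	p := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
    		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
    	}}}
    	fmt.Println(podReady(p)) // true
    }
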
	I0108 20:35:16.022955  702522 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-933566" in "kube-system" namespace to be "Ready" ...
	I0108 20:35:16.023016  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-933566
	I0108 20:35:16.023025  702522 round_trippers.go:469] Request Headers:
	I0108 20:35:16.023032  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:35:16.023039  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:35:16.025214  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:35:16.025264  702522 round_trippers.go:577] Response Headers:
	I0108 20:35:16.025284  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:35:16.025302  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:35:16 GMT
	I0108 20:35:16.025338  702522 round_trippers.go:580]     Audit-Id: 8d7597ab-d2a9-4585-8f17-4d50f4092127
	I0108 20:35:16.025348  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:35:16.025355  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:35:16.025361  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:35:16.025450  702522 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-933566","namespace":"kube-system","uid":"c53171eb-8f57-4639-9d22-203811cf58f2","resourceVersion":"424","creationTimestamp":"2024-01-08T20:34:13Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"497df64d2b22876329916355928b85ab","kubernetes.io/config.mirror":"497df64d2b22876329916355928b85ab","kubernetes.io/config.seen":"2024-01-08T20:34:06.010525199Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0108 20:35:16.025879  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:35:16.025895  702522 round_trippers.go:469] Request Headers:
	I0108 20:35:16.025904  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:35:16.025911  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:35:16.028079  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:35:16.028142  702522 round_trippers.go:577] Response Headers:
	I0108 20:35:16.028155  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:35:16.028162  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:35:16.028177  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:35:16 GMT
	I0108 20:35:16.028190  702522 round_trippers.go:580]     Audit-Id: 387ff17e-78f6-4df8-a877-d589b5554e13
	I0108 20:35:16.028196  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:35:16.028204  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:35:16.028313  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"437","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 20:35:16.028695  702522 pod_ready.go:92] pod "etcd-multinode-933566" in "kube-system" namespace has status "Ready":"True"
	I0108 20:35:16.028712  702522 pod_ready.go:81] duration metric: took 5.746563ms waiting for pod "etcd-multinode-933566" in "kube-system" namespace to be "Ready" ...
	I0108 20:35:16.028728  702522 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-933566" in "kube-system" namespace to be "Ready" ...
	I0108 20:35:16.028788  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-933566
	I0108 20:35:16.028798  702522 round_trippers.go:469] Request Headers:
	I0108 20:35:16.028805  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:35:16.028812  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:35:16.030833  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:35:16.030854  702522 round_trippers.go:577] Response Headers:
	I0108 20:35:16.030863  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:35:16.030870  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:35:16.030876  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:35:16 GMT
	I0108 20:35:16.030882  702522 round_trippers.go:580]     Audit-Id: 4b5f554f-78e7-4acd-8fcf-3512bffb066f
	I0108 20:35:16.030890  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:35:16.030902  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:35:16.031106  702522 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-933566","namespace":"kube-system","uid":"6a1a3eb3-8722-42af-89c7-99e38dd67209","resourceVersion":"425","creationTimestamp":"2024-01-08T20:34:14Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"7cce4a89903b4d928f46e67897916f84","kubernetes.io/config.mirror":"7cce4a89903b4d928f46e67897916f84","kubernetes.io/config.seen":"2024-01-08T20:34:13.891794977Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0108 20:35:16.031634  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:35:16.031651  702522 round_trippers.go:469] Request Headers:
	I0108 20:35:16.031661  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:35:16.031668  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:35:16.033735  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:35:16.033757  702522 round_trippers.go:577] Response Headers:
	I0108 20:35:16.033765  702522 round_trippers.go:580]     Audit-Id: 43569438-661d-48f6-8fe6-49b433210a7c
	I0108 20:35:16.033771  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:35:16.033777  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:35:16.033784  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:35:16.033790  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:35:16.033800  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:35:16 GMT
	I0108 20:35:16.034072  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"437","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 20:35:16.034467  702522 pod_ready.go:92] pod "kube-apiserver-multinode-933566" in "kube-system" namespace has status "Ready":"True"
	I0108 20:35:16.034485  702522 pod_ready.go:81] duration metric: took 5.744577ms waiting for pod "kube-apiserver-multinode-933566" in "kube-system" namespace to be "Ready" ...
	I0108 20:35:16.034497  702522 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-933566" in "kube-system" namespace to be "Ready" ...
	I0108 20:35:16.034561  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-933566
	I0108 20:35:16.034571  702522 round_trippers.go:469] Request Headers:
	I0108 20:35:16.034579  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:35:16.034589  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:35:16.036673  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:35:16.036694  702522 round_trippers.go:577] Response Headers:
	I0108 20:35:16.036732  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:35:16 GMT
	I0108 20:35:16.036740  702522 round_trippers.go:580]     Audit-Id: 4cf52712-b0b7-4a00-a911-4613f7488e81
	I0108 20:35:16.036751  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:35:16.036757  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:35:16.036764  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:35:16.036772  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:35:16.036955  702522 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-933566","namespace":"kube-system","uid":"1138bd89-ec87-4e01-8763-875d458d57a2","resourceVersion":"426","creationTimestamp":"2024-01-08T20:34:14Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"92a9565c2579701599da39a047c04246","kubernetes.io/config.mirror":"92a9565c2579701599da39a047c04246","kubernetes.io/config.seen":"2024-01-08T20:34:13.891800893Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0108 20:35:16.037434  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:35:16.037448  702522 round_trippers.go:469] Request Headers:
	I0108 20:35:16.037457  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:35:16.037464  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:35:16.039565  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:35:16.039587  702522 round_trippers.go:577] Response Headers:
	I0108 20:35:16.039595  702522 round_trippers.go:580]     Audit-Id: 23acba4d-92c7-4513-847d-0a4efdda348f
	I0108 20:35:16.039616  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:35:16.039630  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:35:16.039638  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:35:16.039651  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:35:16.039658  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:35:16 GMT
	I0108 20:35:16.039917  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"437","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 20:35:16.040303  702522 pod_ready.go:92] pod "kube-controller-manager-multinode-933566" in "kube-system" namespace has status "Ready":"True"
	I0108 20:35:16.040321  702522 pod_ready.go:81] duration metric: took 5.81145ms waiting for pod "kube-controller-manager-multinode-933566" in "kube-system" namespace to be "Ready" ...
	I0108 20:35:16.040333  702522 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ffphz" in "kube-system" namespace to be "Ready" ...
	I0108 20:35:16.206684  702522 request.go:629] Waited for 166.283192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ffphz
	I0108 20:35:16.206819  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ffphz
	I0108 20:35:16.206833  702522 round_trippers.go:469] Request Headers:
	I0108 20:35:16.206842  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:35:16.206850  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:35:16.209353  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:35:16.209467  702522 round_trippers.go:577] Response Headers:
	I0108 20:35:16.209503  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:35:16.209524  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:35:16.209532  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:35:16 GMT
	I0108 20:35:16.209541  702522 round_trippers.go:580]     Audit-Id: 2389a7fc-ca66-4f58-9ca3-87d4ba1cf09f
	I0108 20:35:16.209551  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:35:16.209572  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:35:16.209708  702522 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ffphz","generateName":"kube-proxy-","namespace":"kube-system","uid":"9d7cc0b9-6f1e-4c4c-9fce-6787071a095d","resourceVersion":"509","creationTimestamp":"2024-01-08T20:35:13Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d86a78ab-417d-4a21-a1de-e0a57cc46b17","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:35:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d86a78ab-417d-4a21-a1de-e0a57cc46b17\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0108 20:35:16.406507  702522 request.go:629] Waited for 196.300242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-933566-m02
	I0108 20:35:16.406571  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566-m02
	I0108 20:35:16.406582  702522 round_trippers.go:469] Request Headers:
	I0108 20:35:16.406592  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:35:16.406601  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:35:16.409165  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:35:16.409189  702522 round_trippers.go:577] Response Headers:
	I0108 20:35:16.409198  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:35:16 GMT
	I0108 20:35:16.409205  702522 round_trippers.go:580]     Audit-Id: f34fbe65-9451-4b9e-958a-84ee4fc1dafa
	I0108 20:35:16.409212  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:35:16.409218  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:35:16.409226  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:35:16.409235  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:35:16.409343  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566-m02","uid":"a54f9fe1-398b-48b5-b482-b6628a79b549","resourceVersion":"513","creationTimestamp":"2024-01-08T20:35:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_35_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:35:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5810 chars]
	I0108 20:35:16.409763  702522 pod_ready.go:92] pod "kube-proxy-ffphz" in "kube-system" namespace has status "Ready":"True"
	I0108 20:35:16.409781  702522 pod_ready.go:81] duration metric: took 369.438027ms waiting for pod "kube-proxy-ffphz" in "kube-system" namespace to be "Ready" ...
	I0108 20:35:16.409793  702522 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lljgl" in "kube-system" namespace to be "Ready" ...
	I0108 20:35:16.606544  702522 request.go:629] Waited for 196.609562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lljgl
	I0108 20:35:16.606606  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lljgl
	I0108 20:35:16.606616  702522 round_trippers.go:469] Request Headers:
	I0108 20:35:16.606625  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:35:16.606635  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:35:16.609134  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:35:16.609161  702522 round_trippers.go:577] Response Headers:
	I0108 20:35:16.609169  702522 round_trippers.go:580]     Audit-Id: 831bf3d2-6ab7-48ba-93a3-ac8a81a68205
	I0108 20:35:16.609176  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:35:16.609182  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:35:16.609188  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:35:16.609202  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:35:16.609212  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:35:16 GMT
	I0108 20:35:16.609345  702522 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lljgl","generateName":"kube-proxy-","namespace":"kube-system","uid":"7c0d75bb-8b31-4b55-8972-26dc6c5debb7","resourceVersion":"420","creationTimestamp":"2024-01-08T20:34:26Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d86a78ab-417d-4a21-a1de-e0a57cc46b17","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d86a78ab-417d-4a21-a1de-e0a57cc46b17\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0108 20:35:16.806602  702522 request.go:629] Waited for 196.752571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:35:16.806659  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:35:16.806665  702522 round_trippers.go:469] Request Headers:
	I0108 20:35:16.806680  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:35:16.806692  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:35:16.809316  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:35:16.809372  702522 round_trippers.go:577] Response Headers:
	I0108 20:35:16.809395  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:35:16.809417  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:35:16.809456  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:35:16.809481  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:35:16.809501  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:35:16 GMT
	I0108 20:35:16.809526  702522 round_trippers.go:580]     Audit-Id: 5e3eb0f6-d4e0-44e0-bada-8bf8e33f3b9e
	I0108 20:35:16.809659  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"437","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 20:35:16.810078  702522 pod_ready.go:92] pod "kube-proxy-lljgl" in "kube-system" namespace has status "Ready":"True"
	I0108 20:35:16.810099  702522 pod_ready.go:81] duration metric: took 400.299893ms waiting for pod "kube-proxy-lljgl" in "kube-system" namespace to be "Ready" ...
	I0108 20:35:16.810111  702522 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-933566" in "kube-system" namespace to be "Ready" ...
	I0108 20:35:17.006801  702522 request.go:629] Waited for 196.617709ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-933566
	I0108 20:35:17.006868  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-933566
	I0108 20:35:17.006878  702522 round_trippers.go:469] Request Headers:
	I0108 20:35:17.006905  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:35:17.006924  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:35:17.009472  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:35:17.009508  702522 round_trippers.go:577] Response Headers:
	I0108 20:35:17.009569  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:35:17.009577  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:35:17.009584  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:35:17.009593  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:35:17.009599  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:35:17 GMT
	I0108 20:35:17.009615  702522 round_trippers.go:580]     Audit-Id: 4fbaf520-acea-416e-8bfd-1baa07c64a9b
	I0108 20:35:17.009724  702522 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-933566","namespace":"kube-system","uid":"8806e2fe-d851-40d7-84bb-c2e96df92fc8","resourceVersion":"427","creationTimestamp":"2024-01-08T20:34:14Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b9d9df99a15b95ebb51416e26bd0091a","kubernetes.io/config.mirror":"b9d9df99a15b95ebb51416e26bd0091a","kubernetes.io/config.seen":"2024-01-08T20:34:13.891802386Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:34:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0108 20:35:17.206333  702522 request.go:629] Waited for 196.193337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:35:17.206391  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-933566
	I0108 20:35:17.206402  702522 round_trippers.go:469] Request Headers:
	I0108 20:35:17.206411  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:35:17.206420  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:35:17.208915  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:35:17.208936  702522 round_trippers.go:577] Response Headers:
	I0108 20:35:17.208944  702522 round_trippers.go:580]     Audit-Id: cc05263a-e0b6-4cab-9d61-3e6081e36f5b
	I0108 20:35:17.208951  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:35:17.208958  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:35:17.208964  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:35:17.208971  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:35:17.208980  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:35:17 GMT
	I0108 20:35:17.209086  702522 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"437","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:34:10Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 20:35:17.209489  702522 pod_ready.go:92] pod "kube-scheduler-multinode-933566" in "kube-system" namespace has status "Ready":"True"
	I0108 20:35:17.209507  702522 pod_ready.go:81] duration metric: took 399.380681ms waiting for pod "kube-scheduler-multinode-933566" in "kube-system" namespace to be "Ready" ...
	I0108 20:35:17.209522  702522 pod_ready.go:38] duration metric: took 1.199909411s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:35:17.209539  702522 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 20:35:17.209598  702522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:35:17.223266  702522 system_svc.go:56] duration metric: took 13.71964ms WaitForService to wait for kubelet.
	I0108 20:35:17.223291  702522 kubeadm.go:581] duration metric: took 2.762236074s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 20:35:17.223311  702522 node_conditions.go:102] verifying NodePressure condition ...
	I0108 20:35:17.406649  702522 request.go:629] Waited for 183.265048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0108 20:35:17.406744  702522 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0108 20:35:17.406773  702522 round_trippers.go:469] Request Headers:
	I0108 20:35:17.406788  702522 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:35:17.406797  702522 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 20:35:17.409340  702522 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:35:17.409375  702522 round_trippers.go:577] Response Headers:
	I0108 20:35:17.409383  702522 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ece8876-04bf-4b84-8cd8-bd83235add3e
	I0108 20:35:17.409389  702522 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:35:17 GMT
	I0108 20:35:17.409395  702522 round_trippers.go:580]     Audit-Id: cad5f9de-7097-4711-b60c-3e3b5fba0403
	I0108 20:35:17.409402  702522 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:35:17.409408  702522 round_trippers.go:580]     Content-Type: application/json
	I0108 20:35:17.409414  702522 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cb8445d-744e-417d-a1a4-28b161133e98
	I0108 20:35:17.409603  702522 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"516"},"items":[{"metadata":{"name":"multinode-933566","uid":"c759972b-5900-4e38-bb12-91595aa184af","resourceVersion":"437","creationTimestamp":"2024-01-08T20:34:10Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-933566","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-933566","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_34_14_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12884 chars]
	I0108 20:35:17.410250  702522 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0108 20:35:17.410272  702522 node_conditions.go:123] node cpu capacity is 2
	I0108 20:35:17.410282  702522 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0108 20:35:17.410292  702522 node_conditions.go:123] node cpu capacity is 2
	I0108 20:35:17.410297  702522 node_conditions.go:105] duration metric: took 186.980363ms to run NodePressure ...
	I0108 20:35:17.410312  702522 start.go:228] waiting for startup goroutines ...
	I0108 20:35:17.410336  702522 start.go:242] writing updated cluster config ...
	I0108 20:35:17.410665  702522 ssh_runner.go:195] Run: rm -f paused
	I0108 20:35:17.479831  702522 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 20:35:17.483470  702522 out.go:177] * Done! kubectl is now configured to use "multinode-933566" cluster and "default" namespace by default
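
Note: the pod_ready/round_trippers lines above show minikube polling each control-plane pod's Ready condition through the Kubernetes API, and the "Waited ... due to client-side throttling" entries come from client-go's default client-side rate limiter (roughly 5 QPS with a burst of 10), not from API Priority and Fairness. A minimal client-go sketch of the same readiness poll — the pod name, namespace, and limits below are illustrative, not minikube's actual implementation:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True —
    // the same check behind the `"Ready":"True"` lines logged above.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // Raising these avoids the client-side throttling waits seen above.
        cfg.QPS = 50
        cfg.Burst = 100

        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Mirrors "waiting up to 6m0s for pod ... to be Ready".
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-multinode-933566", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient errors as "not ready yet"
                }
                return isPodReady(pod), nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready")
    }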
	
	
	==> CRI-O <==
	Jan 08 20:34:58 multinode-933566 crio[898]: time="2024-01-08 20:34:58.310139771Z" level=info msg="Starting container: e36adcf754edcb5599c705e31ee17bff9995cb98927c3ca86d9f78e0d6d5648d" id=be874517-4740-43f4-9f24-8d5239e8750e name=/runtime.v1.RuntimeService/StartContainer
	Jan 08 20:34:58 multinode-933566 crio[898]: time="2024-01-08 20:34:58.332006709Z" level=info msg="Started container" PID=1902 containerID=e36adcf754edcb5599c705e31ee17bff9995cb98927c3ca86d9f78e0d6d5648d description=kube-system/storage-provisioner/storage-provisioner id=be874517-4740-43f4-9f24-8d5239e8750e name=/runtime.v1.RuntimeService/StartContainer sandboxID=24604e553b0ebdf7e5fa2b7aac01fd50eb8394fa396aaddb5958ed7293512df4
	Jan 08 20:34:58 multinode-933566 crio[898]: time="2024-01-08 20:34:58.353194149Z" level=info msg="Created container cc06f3055dc53d49eaa7cef449ecef05acf8a7093a6c8ddf95fad7f5b4de358f: kube-system/coredns-5dd5756b68-2945x/coredns" id=991bbfc9-8698-46e5-9cb1-d68987a16afd name=/runtime.v1.RuntimeService/CreateContainer
	Jan 08 20:34:58 multinode-933566 crio[898]: time="2024-01-08 20:34:58.353691114Z" level=info msg="Starting container: cc06f3055dc53d49eaa7cef449ecef05acf8a7093a6c8ddf95fad7f5b4de358f" id=feee481e-5327-427f-90c9-3d826bbe0447 name=/runtime.v1.RuntimeService/StartContainer
	Jan 08 20:34:58 multinode-933566 crio[898]: time="2024-01-08 20:34:58.364658425Z" level=info msg="Started container" PID=1925 containerID=cc06f3055dc53d49eaa7cef449ecef05acf8a7093a6c8ddf95fad7f5b4de358f description=kube-system/coredns-5dd5756b68-2945x/coredns id=feee481e-5327-427f-90c9-3d826bbe0447 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a6065585b7906af0fc3e3726e1eec4a2435e76cc98a7256766164899450ac4e8
	Jan 08 20:35:19 multinode-933566 crio[898]: time="2024-01-08 20:35:19.565708240Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-lxnll/POD" id=2c11927e-d918-4a8e-8c66-3e77dab55e40 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 08 20:35:19 multinode-933566 crio[898]: time="2024-01-08 20:35:19.565780905Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 08 20:35:19 multinode-933566 crio[898]: time="2024-01-08 20:35:19.582932962Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-lxnll Namespace:default ID:4bcd415d69e4004e743ef1001360a0e188cd6f4ae9f64bef557aa6e33caafab4 UID:ea67277c-eed7-4456-8c66-ed5772668c58 NetNS:/var/run/netns/86431183-be00-4565-9b29-7a0510187f03 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 08 20:35:19 multinode-933566 crio[898]: time="2024-01-08 20:35:19.582972724Z" level=info msg="Adding pod default_busybox-5bc68d56bd-lxnll to CNI network \"kindnet\" (type=ptp)"
	Jan 08 20:35:19 multinode-933566 crio[898]: time="2024-01-08 20:35:19.594063589Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-lxnll Namespace:default ID:4bcd415d69e4004e743ef1001360a0e188cd6f4ae9f64bef557aa6e33caafab4 UID:ea67277c-eed7-4456-8c66-ed5772668c58 NetNS:/var/run/netns/86431183-be00-4565-9b29-7a0510187f03 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 08 20:35:19 multinode-933566 crio[898]: time="2024-01-08 20:35:19.594212924Z" level=info msg="Checking pod default_busybox-5bc68d56bd-lxnll for CNI network kindnet (type=ptp)"
	Jan 08 20:35:19 multinode-933566 crio[898]: time="2024-01-08 20:35:19.596913937Z" level=info msg="Ran pod sandbox 4bcd415d69e4004e743ef1001360a0e188cd6f4ae9f64bef557aa6e33caafab4 with infra container: default/busybox-5bc68d56bd-lxnll/POD" id=2c11927e-d918-4a8e-8c66-3e77dab55e40 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 08 20:35:19 multinode-933566 crio[898]: time="2024-01-08 20:35:19.602890451Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=ae38c3ff-4262-4a26-8c37-b7c83f56e940 name=/runtime.v1.ImageService/ImageStatus
	Jan 08 20:35:19 multinode-933566 crio[898]: time="2024-01-08 20:35:19.603115134Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=ae38c3ff-4262-4a26-8c37-b7c83f56e940 name=/runtime.v1.ImageService/ImageStatus
	Jan 08 20:35:19 multinode-933566 crio[898]: time="2024-01-08 20:35:19.608579142Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=66faec08-39e7-4a14-8fa0-d95b421d44ea name=/runtime.v1.ImageService/PullImage
	Jan 08 20:35:19 multinode-933566 crio[898]: time="2024-01-08 20:35:19.609681798Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 08 20:35:20 multinode-933566 crio[898]: time="2024-01-08 20:35:20.093874979Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 08 20:35:21 multinode-933566 crio[898]: time="2024-01-08 20:35:21.100098359Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=66faec08-39e7-4a14-8fa0-d95b421d44ea name=/runtime.v1.ImageService/PullImage
	Jan 08 20:35:21 multinode-933566 crio[898]: time="2024-01-08 20:35:21.102957232Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=ce5f2741-f5cb-4453-9549-9edd40fb31be name=/runtime.v1.ImageService/ImageStatus
	Jan 08 20:35:21 multinode-933566 crio[898]: time="2024-01-08 20:35:21.103637197Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ce5f2741-f5cb-4453-9549-9edd40fb31be name=/runtime.v1.ImageService/ImageStatus
	Jan 08 20:35:21 multinode-933566 crio[898]: time="2024-01-08 20:35:21.105866436Z" level=info msg="Creating container: default/busybox-5bc68d56bd-lxnll/busybox" id=6635f7e4-f280-43a8-9cdd-a76e88582251 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 08 20:35:21 multinode-933566 crio[898]: time="2024-01-08 20:35:21.106059874Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 08 20:35:21 multinode-933566 crio[898]: time="2024-01-08 20:35:21.167874102Z" level=info msg="Created container 8b3e52f03fff722633a1790042f3062c3cebbf1db9984bb321b477d11e88f442: default/busybox-5bc68d56bd-lxnll/busybox" id=6635f7e4-f280-43a8-9cdd-a76e88582251 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 08 20:35:21 multinode-933566 crio[898]: time="2024-01-08 20:35:21.170407556Z" level=info msg="Starting container: 8b3e52f03fff722633a1790042f3062c3cebbf1db9984bb321b477d11e88f442" id=f3a93e43-231d-482a-9e3e-f6b008698a3f name=/runtime.v1.RuntimeService/StartContainer
	Jan 08 20:35:21 multinode-933566 crio[898]: time="2024-01-08 20:35:21.187616959Z" level=info msg="Started container" PID=2057 containerID=8b3e52f03fff722633a1790042f3062c3cebbf1db9984bb321b477d11e88f442 description=default/busybox-5bc68d56bd-lxnll/busybox id=f3a93e43-231d-482a-9e3e-f6b008698a3f name=/runtime.v1.RuntimeService/StartContainer sandboxID=4bcd415d69e4004e743ef1001360a0e188cd6f4ae9f64bef557aa6e33caafab4
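
Note: the Checking image status / Pulling image / Created container / Started container lines above are CRI calls (ImageService and RuntimeService) arriving over gRPC on CRI-O's unix socket. A minimal sketch of the same ImageStatus-then-PullImage round trip against the upstream CRI API — illustrative only, not CRI-O or kubelet code:

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // CRI-O's socket, matching the cri-socket annotation in the node output below.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        img := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28"}
        client := runtimeapi.NewImageServiceClient(conn)
        ctx := context.Background()

        // "Checking image status": a nil Image in the response means the
        // image is not present locally (the "not found" line above).
        status, err := client.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: img})
        if err != nil {
            panic(err)
        }
        if status.Image == nil {
            // "Pulling image": blocks until the pull completes or fails.
            pulled, err := client.PullImage(ctx, &runtimeapi.PullImageRequest{Image: img})
            if err != nil {
                panic(err)
            }
            fmt.Println("pulled:", pulled.ImageRef)
        }
    }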
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8b3e52f03fff7       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago        Running             busybox                   0                   4bcd415d69e40       busybox-5bc68d56bd-lxnll
	cc06f3055dc53       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      28 seconds ago       Running             coredns                   0                   a6065585b7906       coredns-5dd5756b68-2945x
	e36adcf754edc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      28 seconds ago       Running             storage-provisioner       0                   24604e553b0eb       storage-provisioner
	3ac6c1baba456       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                      58 seconds ago       Running             kube-proxy                0                   9980bba821af1       kube-proxy-lljgl
	86cd01d41d2b8       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                      59 seconds ago       Running             kindnet-cni               0                   7220795b6db32       kindnet-7wmrt
	06e3ce0f154f8       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                      About a minute ago   Running             kube-apiserver            0                   17aad629c6316       kube-apiserver-multinode-933566
	a6da31534f960       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                      About a minute ago   Running             kube-controller-manager   0                   475c3268437b5       kube-controller-manager-multinode-933566
	50a403a1be15f       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      About a minute ago   Running             etcd                      0                   4d439db5859f0       etcd-multinode-933566
	a452540f453c0       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                      About a minute ago   Running             kube-scheduler            0                   2c76cdaaa828e       kube-scheduler-multinode-933566
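
Note: this table is the CRI-level view of the node's containers; the same columns (CONTAINER, IMAGE, CREATED, STATE, NAME, ATTEMPT, POD ID, POD) are what crictl prints when pointed at the CRI-O socket, e.g.:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a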
	
	
	==> coredns [cc06f3055dc53d49eaa7cef449ecef05acf8a7093a6c8ddf95fad7f5b4de358f] <==
	[INFO] 10.244.0.3:47714 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071606s
	[INFO] 10.244.1.2:43974 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102336s
	[INFO] 10.244.1.2:45741 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001056338s
	[INFO] 10.244.1.2:57508 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000061465s
	[INFO] 10.244.1.2:56390 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000066274s
	[INFO] 10.244.1.2:39636 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00080822s
	[INFO] 10.244.1.2:40897 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000059037s
	[INFO] 10.244.1.2:50690 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000053769s
	[INFO] 10.244.1.2:54417 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056543s
	[INFO] 10.244.0.3:36837 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123217s
	[INFO] 10.244.0.3:50982 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099529s
	[INFO] 10.244.0.3:43294 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072107s
	[INFO] 10.244.0.3:58181 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054269s
	[INFO] 10.244.1.2:49932 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137937s
	[INFO] 10.244.1.2:36397 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080279s
	[INFO] 10.244.1.2:49973 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088329s
	[INFO] 10.244.1.2:34476 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060989s
	[INFO] 10.244.0.3:44215 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010579s
	[INFO] 10.244.0.3:48541 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114865s
	[INFO] 10.244.0.3:41379 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149154s
	[INFO] 10.244.0.3:52599 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000102992s
	[INFO] 10.244.1.2:55895 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140621s
	[INFO] 10.244.1.2:45023 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000084604s
	[INFO] 10.244.1.2:51880 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00008526s
	[INFO] 10.244.1.2:38730 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000088846s
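
Note: the alternating NXDOMAIN/NOERROR answers above are ordinary ndots search-path expansion, not failures: a pod in the default namespace looking up kubernetes.default first tries it under its own namespace suffix (kubernetes.default.default.svc.cluster.local, NXDOMAIN) and then under svc.cluster.local, which answers NOERROR. A typical pod /etc/resolv.conf that produces exactly this sequence (assumed, not captured in this run; 10.96.0.10 matches the PTR queries for 10.0.96.10.in-addr.arpa above):

    nameserver 10.96.0.10
    search default.svc.cluster.local svc.cluster.local cluster.local
    options ndots:5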
	
	
	==> describe nodes <==
	Name:               multinode-933566
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-933566
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=multinode-933566
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T20_34_14_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 20:34:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-933566
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 20:35:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 20:34:57 +0000   Mon, 08 Jan 2024 20:34:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 20:34:57 +0000   Mon, 08 Jan 2024 20:34:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 20:34:57 +0000   Mon, 08 Jan 2024 20:34:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 20:34:57 +0000   Mon, 08 Jan 2024 20:34:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-933566
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 acea0b509491411d97cc6140a99331cb
	  System UUID:                9ef6de11-ce47-4fd2-bd3b-e5fed0556dc6
	  Boot ID:                    9a753e90-64b1-452a-8e10-9b878947801f
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-lxnll                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5dd5756b68-2945x                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     60s
	  kube-system                 etcd-multinode-933566                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         73s
	  kube-system                 kindnet-7wmrt                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      60s
	  kube-system                 kube-apiserver-multinode-933566             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-controller-manager-multinode-933566    200m (10%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-proxy-lljgl                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-scheduler-multinode-933566             100m (5%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 58s                kube-proxy       
	  Normal  Starting                 80s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  80s (x8 over 80s)  kubelet          Node multinode-933566 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    80s (x8 over 80s)  kubelet          Node multinode-933566 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     80s (x8 over 80s)  kubelet          Node multinode-933566 status is now: NodeHasSufficientPID
	  Normal  Starting                 73s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  72s                kubelet          Node multinode-933566 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    72s                kubelet          Node multinode-933566 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     72s                kubelet          Node multinode-933566 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           60s                node-controller  Node multinode-933566 event: Registered Node multinode-933566 in Controller
	  Normal  NodeReady                29s                kubelet          Node multinode-933566 status is now: NodeReady
	
	
	Name:               multinode-933566-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-933566-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=multinode-933566
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_08T20_35_13_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 20:35:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-933566-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 20:35:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 20:35:15 +0000   Mon, 08 Jan 2024 20:35:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 20:35:15 +0000   Mon, 08 Jan 2024 20:35:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 20:35:15 +0000   Mon, 08 Jan 2024 20:35:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 20:35:15 +0000   Mon, 08 Jan 2024 20:35:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-933566-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 88e04ea93e274da9b45a195e2fa5217e
	  System UUID:                c536972e-abd6-4254-b8f9-7f7bfe586bf5
	  Boot ID:                    9a753e90-64b1-452a-8e10-9b878947801f
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-zsk76    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-scpd5               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13s
	  kube-system                 kube-proxy-ffphz            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11s                kube-proxy       
	  Normal  NodeHasSufficientMemory  13s (x5 over 14s)  kubelet          Node multinode-933566-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x5 over 14s)  kubelet          Node multinode-933566-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x5 over 14s)  kubelet          Node multinode-933566-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                11s                kubelet          Node multinode-933566-m02 status is now: NodeReady
	  Normal  RegisteredNode           10s                node-controller  Node multinode-933566-m02 event: Registered Node multinode-933566-m02 in Controller
	
	
	==> dmesg <==
	[  +0.001079] FS-Cache: O-key=[8] 'a070ed0000000000'
	[  +0.000709] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000950] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=00000000b771cf9b
	[  +0.001160] FS-Cache: N-key=[8] 'a070ed0000000000'
	[  +0.005206] FS-Cache: Duplicate cookie detected
	[  +0.000737] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.000938] FS-Cache: O-cookie d=000000001df03bef{9p.inode} n=000000007ac10e0d
	[  +0.001142] FS-Cache: O-key=[8] 'a070ed0000000000'
	[  +0.000710] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000958] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=00000000372890f4
	[  +0.001099] FS-Cache: N-key=[8] 'a070ed0000000000'
	[  +2.042043] FS-Cache: Duplicate cookie detected
	[  +0.000813] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.001067] FS-Cache: O-cookie d=000000001df03bef{9p.inode} n=00000000126b129b
	[  +0.001188] FS-Cache: O-key=[8] '9f70ed0000000000'
	[  +0.000740] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000987] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=00000000b771cf9b
	[  +0.001256] FS-Cache: N-key=[8] '9f70ed0000000000'
	[  +0.329505] FS-Cache: Duplicate cookie detected
	[  +0.000784] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.001108] FS-Cache: O-cookie d=000000001df03bef{9p.inode} n=0000000009be8c6c
	[  +0.001149] FS-Cache: O-key=[8] 'a570ed0000000000'
	[  +0.000824] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000998] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=000000006cd4597f
	[  +0.001205] FS-Cache: N-key=[8] 'a570ed0000000000'
	
	
	==> etcd [50a403a1be15f02f4ab2ca6f780ce6842ad2e3affa34620d92be105c62155d3b] <==
	{"level":"info","ts":"2024-01-08T20:34:06.837206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2024-01-08T20:34:06.837327Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2024-01-08T20:34:06.838587Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-08T20:34:06.838747Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2024-01-08T20:34:06.838872Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2024-01-08T20:34:06.839487Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-08T20:34:06.839468Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-08T20:34:07.109117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-08T20:34:07.109246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-08T20:34:07.109287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2024-01-08T20:34:07.10936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2024-01-08T20:34:07.109393Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-01-08T20:34:07.109444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2024-01-08T20:34:07.109477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-01-08T20:34:07.114654Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-933566 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T20:34:07.114749Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T20:34:07.115911Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T20:34:07.116042Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:34:07.116244Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T20:34:07.117122Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2024-01-08T20:34:07.117641Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T20:34:07.117693Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T20:34:07.117906Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:34:07.118011Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:34:07.118075Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 20:35:26 up  3:17,  0 users,  load average: 0.93, 1.30, 1.33
	Linux multinode-933566 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [86cd01d41d2b8fc210a3bfbedebdd8b80d30169fa252302b880f763b67bf8059] <==
	I0108 20:34:27.151248       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0108 20:34:27.153242       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0108 20:34:27.153436       1 main.go:116] setting mtu 1500 for CNI 
	I0108 20:34:27.153480       1 main.go:146] kindnetd IP family: "ipv4"
	I0108 20:34:27.153525       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0108 20:34:57.544817       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0108 20:34:57.557943       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0108 20:34:57.557979       1 main.go:227] handling current node
	I0108 20:35:07.574735       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0108 20:35:07.574856       1 main.go:227] handling current node
	I0108 20:35:17.588204       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0108 20:35:17.588236       1 main.go:227] handling current node
	I0108 20:35:17.588248       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0108 20:35:17.588254       1 main.go:250] Node multinode-933566-m02 has CIDR [10.244.1.0/24] 
	I0108 20:35:17.588397       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	
	
	==> kube-apiserver [06e3ce0f154f824bb05b8f6c6d05bf3a10e00457044fe56794b850c7a00b48c5] <==
	I0108 20:34:10.811803       1 shared_informer.go:318] Caches are synced for configmaps
	I0108 20:34:10.816763       1 controller.go:624] quota admission added evaluator for: namespaces
	I0108 20:34:10.817590       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0108 20:34:10.817707       1 aggregator.go:166] initial CRD sync complete...
	I0108 20:34:10.817742       1 autoregister_controller.go:141] Starting autoregister controller
	I0108 20:34:10.817772       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0108 20:34:10.817805       1 cache.go:39] Caches are synced for autoregister controller
	E0108 20:34:10.845302       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0108 20:34:11.048151       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 20:34:11.515878       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0108 20:34:11.522068       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0108 20:34:11.522096       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 20:34:12.021527       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 20:34:12.068536       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 20:34:12.162517       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0108 20:34:12.168635       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0108 20:34:12.169667       1 controller.go:624] quota admission added evaluator for: endpoints
	I0108 20:34:12.173836       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 20:34:12.729619       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0108 20:34:13.800231       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0108 20:34:13.812306       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0108 20:34:13.840361       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0108 20:34:26.432978       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0108 20:34:26.540175       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E0108 20:35:22.213666       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x4008d98840), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x4008824870), ResponseWriter:(*httpsnoop.rw)(0x4008824870), Flusher:(*httpsnoop.rw)(0x4008824870), CloseNotifier:(*httpsnoop.rw)(0x4008824870), Pusher:(*httpsnoop.rw)(0x4008824870)}}, encoder:(*versioning.codec)(0x400b09ab40), memAllocator:(*runtime.Allocator)(0x4009c887e0)})
	
	
	==> kube-controller-manager [a6da31534f960e9124156feb3561932151542920b443baa0223ae3597afd8153] <==
	I0108 20:34:27.047828       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="178.053µs"
	I0108 20:34:57.900554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.155µs"
	I0108 20:34:57.945696       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="135.197µs"
	I0108 20:34:59.141562       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.985µs"
	I0108 20:34:59.176317       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.187088ms"
	I0108 20:34:59.176676       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.849µs"
	I0108 20:35:01.348986       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0108 20:35:13.654167       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-933566-m02\" does not exist"
	I0108 20:35:13.667464       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-933566-m02" podCIDRs=["10.244.1.0/24"]
	I0108 20:35:13.688746       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ffphz"
	I0108 20:35:13.688854       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-scpd5"
	I0108 20:35:15.532436       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-933566-m02"
	I0108 20:35:16.351406       1 event.go:307] "Event occurred" object="multinode-933566-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-933566-m02 event: Registered Node multinode-933566-m02 in Controller"
	I0108 20:35:16.351520       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-933566-m02"
	I0108 20:35:18.328742       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0108 20:35:18.336713       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-zsk76"
	I0108 20:35:18.348118       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-lxnll"
	I0108 20:35:18.373109       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="45.062528ms"
	I0108 20:35:18.389699       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="16.446836ms"
	I0108 20:35:18.389849       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="34.634µs"
	I0108 20:35:18.395456       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="61.818µs"
	I0108 20:35:21.200654       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="9.9381ms"
	I0108 20:35:21.200886       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="43.176µs"
	I0108 20:35:22.204571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.350316ms"
	I0108 20:35:22.204973       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="38.95µs"
	
	
	==> kube-proxy [3ac6c1baba456390653682d27d1e3c302b54bf9b5857c7a505e8893b1f948d74] <==
	I0108 20:34:27.923946       1 server_others.go:69] "Using iptables proxy"
	I0108 20:34:27.938955       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0108 20:34:27.962739       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0108 20:34:27.964677       1 server_others.go:152] "Using iptables Proxier"
	I0108 20:34:27.964709       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0108 20:34:27.964724       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0108 20:34:27.964797       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 20:34:27.965022       1 server.go:846] "Version info" version="v1.28.4"
	I0108 20:34:27.965039       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 20:34:27.967542       1 config.go:188] "Starting service config controller"
	I0108 20:34:27.967567       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 20:34:27.967601       1 config.go:97] "Starting endpoint slice config controller"
	I0108 20:34:27.967605       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 20:34:27.968399       1 config.go:315] "Starting node config controller"
	I0108 20:34:27.968416       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 20:34:28.067738       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 20:34:28.067745       1 shared_informer.go:318] Caches are synced for service config
	I0108 20:34:28.069206       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [a452540f453c0549a126caa3bc3aa358c6e7487eb9938a6c448a178ca9050c31] <==
	W0108 20:34:10.798202       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 20:34:10.798219       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 20:34:10.798284       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 20:34:10.798299       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0108 20:34:10.798342       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 20:34:10.798357       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 20:34:10.798394       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 20:34:10.798408       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 20:34:10.798466       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 20:34:10.798482       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 20:34:10.798524       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 20:34:10.798538       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 20:34:10.798621       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 20:34:10.798676       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 20:34:11.633977       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 20:34:11.634103       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 20:34:11.667208       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 20:34:11.667324       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 20:34:11.692959       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 20:34:11.693083       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 20:34:11.842169       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 20:34:11.842203       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 20:34:12.072636       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 20:34:12.072768       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0108 20:34:13.761712       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 08 20:34:26 multinode-933566 kubelet[1385]: I0108 20:34:26.617166    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3c01562-5fb4-40e3-81c7-53be33365b5e-xtables-lock\") pod \"kindnet-7wmrt\" (UID: \"a3c01562-5fb4-40e3-81c7-53be33365b5e\") " pod="kube-system/kindnet-7wmrt"
	Jan 08 20:34:26 multinode-933566 kubelet[1385]: I0108 20:34:26.617191    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c0d75bb-8b31-4b55-8972-26dc6c5debb7-xtables-lock\") pod \"kube-proxy-lljgl\" (UID: \"7c0d75bb-8b31-4b55-8972-26dc6c5debb7\") " pod="kube-system/kube-proxy-lljgl"
	Jan 08 20:34:26 multinode-933566 kubelet[1385]: I0108 20:34:26.617216    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a3c01562-5fb4-40e3-81c7-53be33365b5e-cni-cfg\") pod \"kindnet-7wmrt\" (UID: \"a3c01562-5fb4-40e3-81c7-53be33365b5e\") " pod="kube-system/kindnet-7wmrt"
	Jan 08 20:34:26 multinode-933566 kubelet[1385]: I0108 20:34:26.617237    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3c01562-5fb4-40e3-81c7-53be33365b5e-lib-modules\") pod \"kindnet-7wmrt\" (UID: \"a3c01562-5fb4-40e3-81c7-53be33365b5e\") " pod="kube-system/kindnet-7wmrt"
	Jan 08 20:34:26 multinode-933566 kubelet[1385]: I0108 20:34:26.617262    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c0d75bb-8b31-4b55-8972-26dc6c5debb7-lib-modules\") pod \"kube-proxy-lljgl\" (UID: \"7c0d75bb-8b31-4b55-8972-26dc6c5debb7\") " pod="kube-system/kube-proxy-lljgl"
	Jan 08 20:34:26 multinode-933566 kubelet[1385]: I0108 20:34:26.617288    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwh5q\" (UniqueName: \"kubernetes.io/projected/7c0d75bb-8b31-4b55-8972-26dc6c5debb7-kube-api-access-fwh5q\") pod \"kube-proxy-lljgl\" (UID: \"7c0d75bb-8b31-4b55-8972-26dc6c5debb7\") " pod="kube-system/kube-proxy-lljgl"
	Jan 08 20:34:26 multinode-933566 kubelet[1385]: I0108 20:34:26.617316    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn76d\" (UniqueName: \"kubernetes.io/projected/a3c01562-5fb4-40e3-81c7-53be33365b5e-kube-api-access-dn76d\") pod \"kindnet-7wmrt\" (UID: \"a3c01562-5fb4-40e3-81c7-53be33365b5e\") " pod="kube-system/kindnet-7wmrt"
	Jan 08 20:34:27 multinode-933566 kubelet[1385]: W0108 20:34:27.787652    1385 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/21d6edc8691bb2b60d1720def2f012d16584a71959435035c4625be32f0c36cb/crio-9980bba821af1dd3a4891a9d079b8580eb4c28b29cf36b6b134459ed55881a5f WatchSource:0}: Error finding container 9980bba821af1dd3a4891a9d079b8580eb4c28b29cf36b6b134459ed55881a5f: Status 404 returned error can't find the container with id 9980bba821af1dd3a4891a9d079b8580eb4c28b29cf36b6b134459ed55881a5f
	Jan 08 20:34:28 multinode-933566 kubelet[1385]: I0108 20:34:28.106540    1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-7wmrt" podStartSLOduration=2.106494831 podCreationTimestamp="2024-01-08 20:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 20:34:28.088141649 +0000 UTC m=+14.310125240" watchObservedRunningTime="2024-01-08 20:34:28.106494831 +0000 UTC m=+14.328478431"
	Jan 08 20:34:34 multinode-933566 kubelet[1385]: I0108 20:34:34.000424    1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lljgl" podStartSLOduration=8.000381072 podCreationTimestamp="2024-01-08 20:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 20:34:28.107296824 +0000 UTC m=+14.329280415" watchObservedRunningTime="2024-01-08 20:34:34.000381072 +0000 UTC m=+20.222364663"
	Jan 08 20:34:57 multinode-933566 kubelet[1385]: I0108 20:34:57.871208    1385 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 08 20:34:57 multinode-933566 kubelet[1385]: I0108 20:34:57.898173    1385 topology_manager.go:215] "Topology Admit Handler" podUID="dfb7da4b-0626-4cd3-accf-49736fec486b" podNamespace="kube-system" podName="coredns-5dd5756b68-2945x"
	Jan 08 20:34:57 multinode-933566 kubelet[1385]: I0108 20:34:57.899238    1385 topology_manager.go:215] "Topology Admit Handler" podUID="b11f34a8-ca65-4977-b11c-a1d51dcb66e6" podNamespace="kube-system" podName="storage-provisioner"
	Jan 08 20:34:57 multinode-933566 kubelet[1385]: I0108 20:34:57.954107    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfb7da4b-0626-4cd3-accf-49736fec486b-config-volume\") pod \"coredns-5dd5756b68-2945x\" (UID: \"dfb7da4b-0626-4cd3-accf-49736fec486b\") " pod="kube-system/coredns-5dd5756b68-2945x"
	Jan 08 20:34:57 multinode-933566 kubelet[1385]: I0108 20:34:57.954172    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b11f34a8-ca65-4977-b11c-a1d51dcb66e6-tmp\") pod \"storage-provisioner\" (UID: \"b11f34a8-ca65-4977-b11c-a1d51dcb66e6\") " pod="kube-system/storage-provisioner"
	Jan 08 20:34:57 multinode-933566 kubelet[1385]: I0108 20:34:57.954199    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7fvs\" (UniqueName: \"kubernetes.io/projected/b11f34a8-ca65-4977-b11c-a1d51dcb66e6-kube-api-access-t7fvs\") pod \"storage-provisioner\" (UID: \"b11f34a8-ca65-4977-b11c-a1d51dcb66e6\") " pod="kube-system/storage-provisioner"
	Jan 08 20:34:57 multinode-933566 kubelet[1385]: I0108 20:34:57.954228    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2t7j\" (UniqueName: \"kubernetes.io/projected/dfb7da4b-0626-4cd3-accf-49736fec486b-kube-api-access-b2t7j\") pod \"coredns-5dd5756b68-2945x\" (UID: \"dfb7da4b-0626-4cd3-accf-49736fec486b\") " pod="kube-system/coredns-5dd5756b68-2945x"
	Jan 08 20:34:58 multinode-933566 kubelet[1385]: W0108 20:34:58.232534    1385 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/21d6edc8691bb2b60d1720def2f012d16584a71959435035c4625be32f0c36cb/crio-24604e553b0ebdf7e5fa2b7aac01fd50eb8394fa396aaddb5958ed7293512df4 WatchSource:0}: Error finding container 24604e553b0ebdf7e5fa2b7aac01fd50eb8394fa396aaddb5958ed7293512df4: Status 404 returned error can't find the container with id 24604e553b0ebdf7e5fa2b7aac01fd50eb8394fa396aaddb5958ed7293512df4
	Jan 08 20:34:58 multinode-933566 kubelet[1385]: W0108 20:34:58.252575    1385 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/21d6edc8691bb2b60d1720def2f012d16584a71959435035c4625be32f0c36cb/crio-a6065585b7906af0fc3e3726e1eec4a2435e76cc98a7256766164899450ac4e8 WatchSource:0}: Error finding container a6065585b7906af0fc3e3726e1eec4a2435e76cc98a7256766164899450ac4e8: Status 404 returned error can't find the container with id a6065585b7906af0fc3e3726e1eec4a2435e76cc98a7256766164899450ac4e8
	Jan 08 20:34:59 multinode-933566 kubelet[1385]: I0108 20:34:59.139132    1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-2945x" podStartSLOduration=33.139092177 podCreationTimestamp="2024-01-08 20:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 20:34:59.138617071 +0000 UTC m=+45.360600662" watchObservedRunningTime="2024-01-08 20:34:59.139092177 +0000 UTC m=+45.361075760"
	Jan 08 20:35:18 multinode-933566 kubelet[1385]: I0108 20:35:18.363676    1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=51.363634074 podCreationTimestamp="2024-01-08 20:34:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 20:34:59.184198668 +0000 UTC m=+45.406182259" watchObservedRunningTime="2024-01-08 20:35:18.363634074 +0000 UTC m=+64.585617657"
	Jan 08 20:35:18 multinode-933566 kubelet[1385]: I0108 20:35:18.363858    1385 topology_manager.go:215] "Topology Admit Handler" podUID="ea67277c-eed7-4456-8c66-ed5772668c58" podNamespace="default" podName="busybox-5bc68d56bd-lxnll"
	Jan 08 20:35:18 multinode-933566 kubelet[1385]: W0108 20:35:18.371906    1385 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-933566" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-933566' and this object
	Jan 08 20:35:18 multinode-933566 kubelet[1385]: E0108 20:35:18.371950    1385 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-933566" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-933566' and this object
	Jan 08 20:35:18 multinode-933566 kubelet[1385]: I0108 20:35:18.401050    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xdh5\" (UniqueName: \"kubernetes.io/projected/ea67277c-eed7-4456-8c66-ed5772668c58-kube-api-access-8xdh5\") pod \"busybox-5bc68d56bd-lxnll\" (UID: \"ea67277c-eed7-4456-8c66-ed5772668c58\") " pod="default/busybox-5bc68d56bd-lxnll"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-933566 -n multinode-933566
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-933566 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.91s)
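
For reference, a rough manual equivalent of what this test exercises (a sketch only, not the test's exact invocation): minikube maps the host to the name host.minikube.internal inside the cluster, and the test pings that address from each busybox pod. The pod names below are taken from the logs above; substitute the IP that the first command prints.

	# resolve host.minikube.internal inside the node
	minikube -p multinode-933566 ssh "grep host.minikube.internal /etc/hosts"
	# ping the resolved address from each of the two busybox pods
	kubectl --context multinode-933566 exec busybox-5bc68d56bd-zsk76 -- ping -c 1 <host-ip>
	kubectl --context multinode-933566 exec busybox-5bc68d56bd-lxnll -- ping -c 1 <host-ip>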

TestRunningBinaryUpgrade (76.32s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.1741039656.exe start -p running-upgrade-282391 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0108 20:51:26.377831  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.1741039656.exe start -p running-upgrade-282391 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m7.888714351s)
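For context, TestRunningBinaryUpgrade drives an in-place upgrade: an old release binary brings the cluster up, and start is then re-run on the same, still-running profile with the binary under test. A minimal sketch of the two-step flow, using the exact commands from this run (the second step is the one that fails below):

	# step 1: start the cluster with the old release binary (completed above)
	/tmp/minikube-v1.17.0.1741039656.exe start -p running-upgrade-282391 --memory=2200 --vm-driver=docker --container-runtime=crio
	# step 2: re-run start on the same profile with the binary under test
	out/minikube-linux-arm64 start -p running-upgrade-282391 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=crio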
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-282391 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-282391 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (3.313937821s)

-- stdout --
	* [running-upgrade-282391] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-633350/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-633350/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-282391 in cluster running-upgrade-282391
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Updating the running docker "running-upgrade-282391" container ...
	
	

-- /stdout --
** stderr ** 
	I0108 20:52:27.029376  763597 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:52:27.029591  763597 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:52:27.029622  763597 out.go:309] Setting ErrFile to fd 2...
	I0108 20:52:27.029644  763597 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:52:27.029920  763597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-633350/.minikube/bin
	I0108 20:52:27.030339  763597 out.go:303] Setting JSON to false
	I0108 20:52:27.031719  763597 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12889,"bootTime":1704734258,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0108 20:52:27.031851  763597 start.go:138] virtualization:  
	I0108 20:52:27.037367  763597 out.go:177] * [running-upgrade-282391] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 20:52:27.039288  763597 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:52:27.039501  763597 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0108 20:52:27.039540  763597 notify.go:220] Checking for updates...
	I0108 20:52:27.042621  763597 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:52:27.048005  763597 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-633350/kubeconfig
	I0108 20:52:27.049996  763597 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-633350/.minikube
	I0108 20:52:27.053440  763597 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0108 20:52:27.057433  763597 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:52:27.060036  763597 config.go:182] Loaded profile config "running-upgrade-282391": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0108 20:52:27.062699  763597 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0108 20:52:27.066187  763597 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:52:27.093118  763597 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:52:27.093331  763597 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:52:27.211144  763597 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:54 SystemTime:2024-01-08 20:52:27.195847471 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:52:27.211250  763597 docker.go:295] overlay module found
	I0108 20:52:27.214658  763597 out.go:177] * Using the docker driver based on existing profile
	I0108 20:52:27.217075  763597 start.go:298] selected driver: docker
	I0108 20:52:27.217098  763597 start.go:902] validating driver "docker" against &{Name:running-upgrade-282391 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-282391 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.24 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 20:52:27.217188  763597 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:52:27.217848  763597 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:52:27.243046  763597 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I0108 20:52:27.296615  763597 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:54 SystemTime:2024-01-08 20:52:27.287265414 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:52:27.297006  763597 cni.go:84] Creating CNI manager for ""
	I0108 20:52:27.297024  763597 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 20:52:27.297037  763597 start_flags.go:323] config:
	{Name:running-upgrade-282391 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-282391 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.24 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 20:52:27.300018  763597 out.go:177] * Starting control plane node running-upgrade-282391 in cluster running-upgrade-282391
	I0108 20:52:27.302124  763597 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 20:52:27.304006  763597 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:52:27.305902  763597 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0108 20:52:27.305989  763597 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0108 20:52:27.324357  763597 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0108 20:52:27.324378  763597 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0108 20:52:27.373998  763597 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0108 20:52:27.374193  763597 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/running-upgrade-282391/config.json ...
	I0108 20:52:27.374491  763597 cache.go:194] Successfully downloaded all kic artifacts
	I0108 20:52:27.374648  763597 cache.go:107] acquiring lock: {Name:mk3c8286e2cc2bf23333f2fde93bbbffaca2d67d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:52:27.374731  763597 cache.go:115] /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0108 20:52:27.374743  763597 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 98.873µs
	I0108 20:52:27.374754  763597 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0108 20:52:27.374776  763597 cache.go:107] acquiring lock: {Name:mk44cb6b843ba721f847f64865744c5f7915221a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:52:27.374812  763597 cache.go:115] /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0108 20:52:27.374821  763597 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 53.416µs
	I0108 20:52:27.374828  763597 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0108 20:52:27.374837  763597 cache.go:107] acquiring lock: {Name:mkdf84d353c206e379592d67df524a8e57bb96f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:52:27.374866  763597 cache.go:115] /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0108 20:52:27.374874  763597 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 38.745µs
	I0108 20:52:27.374882  763597 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0108 20:52:27.374890  763597 cache.go:107] acquiring lock: {Name:mk1f9dd73c9040b4843877ea6d579cd2a6afc14d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:52:27.374914  763597 cache.go:115] /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0108 20:52:27.374919  763597 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 30.096µs
	I0108 20:52:27.374925  763597 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0108 20:52:27.374933  763597 cache.go:107] acquiring lock: {Name:mkc0e5eb4ee5b95208370bf9ab86e472522e23cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:52:27.374965  763597 cache.go:115] /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0108 20:52:27.374973  763597 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 40.846µs
	I0108 20:52:27.374984  763597 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0108 20:52:27.374995  763597 cache.go:107] acquiring lock: {Name:mk94eab46ef127117a2ac55cb5fea6764e134f30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:52:27.375026  763597 cache.go:115] /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0108 20:52:27.375034  763597 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 38.351µs
	I0108 20:52:27.375044  763597 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0108 20:52:27.375063  763597 cache.go:107] acquiring lock: {Name:mke00c1fa35ee123cfffa38e119041010daad15e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:52:27.375093  763597 cache.go:115] /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0108 20:52:27.375100  763597 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 38.416µs
	I0108 20:52:27.375106  763597 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0108 20:52:27.375114  763597 cache.go:107] acquiring lock: {Name:mke5483038fdde0966ca33aae1d2ab3eafd4be68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:52:27.375144  763597 cache.go:115] /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0108 20:52:27.375152  763597 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 38.975µs
	I0108 20:52:27.375158  763597 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0108 20:52:27.375163  763597 cache.go:87] Successfully saved all images to host disk.
	I0108 20:52:27.375221  763597 start.go:365] acquiring machines lock for running-upgrade-282391: {Name:mkfbd60fcbe499d4b1de5f3bd09b7282d2747f06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:52:27.375277  763597 start.go:369] acquired machines lock for "running-upgrade-282391" in 38.811µs
	I0108 20:52:27.375294  763597 start.go:96] Skipping create...Using existing machine configuration
	I0108 20:52:27.375300  763597 fix.go:54] fixHost starting: 
	I0108 20:52:27.375637  763597 cli_runner.go:164] Run: docker container inspect running-upgrade-282391 --format={{.State.Status}}
	I0108 20:52:27.393855  763597 fix.go:102] recreateIfNeeded on running-upgrade-282391: state=Running err=<nil>
	W0108 20:52:27.393888  763597 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 20:52:27.397466  763597 out.go:177] * Updating the running docker "running-upgrade-282391" container ...
	I0108 20:52:27.399502  763597 machine.go:88] provisioning docker machine ...
	I0108 20:52:27.399542  763597 ubuntu.go:169] provisioning hostname "running-upgrade-282391"
	I0108 20:52:27.399616  763597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-282391
	I0108 20:52:27.417474  763597 main.go:141] libmachine: Using SSH client type: native
	I0108 20:52:27.417930  763597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33590 <nil> <nil>}
	I0108 20:52:27.417951  763597 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-282391 && echo "running-upgrade-282391" | sudo tee /etc/hostname
	I0108 20:52:27.570482  763597 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-282391
	
	I0108 20:52:27.570561  763597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-282391
	I0108 20:52:27.589380  763597 main.go:141] libmachine: Using SSH client type: native
	I0108 20:52:27.589776  763597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33590 <nil> <nil>}
	I0108 20:52:27.589808  763597 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-282391' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-282391/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-282391' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:52:27.739526  763597 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:52:27.739551  763597 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17907-633350/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-633350/.minikube}
	I0108 20:52:27.739573  763597 ubuntu.go:177] setting up certificates
	I0108 20:52:27.739587  763597 provision.go:83] configureAuth start
	I0108 20:52:27.739658  763597 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-282391
	I0108 20:52:27.757389  763597 provision.go:138] copyHostCerts
	I0108 20:52:27.757470  763597 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-633350/.minikube/key.pem, removing ...
	I0108 20:52:27.757484  763597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-633350/.minikube/key.pem
	I0108 20:52:27.757561  763597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-633350/.minikube/key.pem (1679 bytes)
	I0108 20:52:27.757657  763597 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-633350/.minikube/ca.pem, removing ...
	I0108 20:52:27.757671  763597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-633350/.minikube/ca.pem
	I0108 20:52:27.757706  763597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-633350/.minikube/ca.pem (1082 bytes)
	I0108 20:52:27.757764  763597 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-633350/.minikube/cert.pem, removing ...
	I0108 20:52:27.757774  763597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-633350/.minikube/cert.pem
	I0108 20:52:27.757799  763597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-633350/.minikube/cert.pem (1123 bytes)
	I0108 20:52:27.757848  763597 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-282391 san=[192.168.70.24 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-282391]
	I0108 20:52:28.358207  763597 provision.go:172] copyRemoteCerts
	I0108 20:52:28.358304  763597 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:52:28.358353  763597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-282391
	I0108 20:52:28.377435  763597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33590 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/running-upgrade-282391/id_rsa Username:docker}
	I0108 20:52:28.476631  763597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 20:52:28.503974  763597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 20:52:28.527710  763597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 20:52:28.552867  763597 provision.go:86] duration metric: configureAuth took 813.260304ms
	I0108 20:52:28.552894  763597 ubuntu.go:193] setting minikube options for container-runtime
	I0108 20:52:28.553100  763597 config.go:182] Loaded profile config "running-upgrade-282391": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0108 20:52:28.553228  763597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-282391
	I0108 20:52:28.577447  763597 main.go:141] libmachine: Using SSH client type: native
	I0108 20:52:28.577853  763597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33590 <nil> <nil>}
	I0108 20:52:28.577872  763597 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 20:52:29.201111  763597 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 20:52:29.201132  763597 machine.go:91] provisioned docker machine in 1.8016128s
	I0108 20:52:29.201142  763597 start.go:300] post-start starting for "running-upgrade-282391" (driver="docker")
	I0108 20:52:29.201153  763597 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:52:29.201220  763597 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:52:29.201306  763597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-282391
	I0108 20:52:29.233909  763597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33590 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/running-upgrade-282391/id_rsa Username:docker}
	I0108 20:52:29.342430  763597 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:52:29.346379  763597 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 20:52:29.346400  763597 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 20:52:29.346411  763597 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 20:52:29.346417  763597 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0108 20:52:29.346427  763597 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-633350/.minikube/addons for local assets ...
	I0108 20:52:29.346631  763597 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-633350/.minikube/files for local assets ...
	I0108 20:52:29.346730  763597 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/6387322.pem -> 6387322.pem in /etc/ssl/certs
	I0108 20:52:29.346846  763597 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 20:52:29.360573  763597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/6387322.pem --> /etc/ssl/certs/6387322.pem (1708 bytes)
	I0108 20:52:29.447599  763597 start.go:303] post-start completed in 246.439973ms
	I0108 20:52:29.447698  763597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:52:29.447757  763597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-282391
	I0108 20:52:29.472568  763597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33590 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/running-upgrade-282391/id_rsa Username:docker}
	I0108 20:52:29.594643  763597 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 20:52:29.607326  763597 fix.go:56] fixHost completed within 2.232018441s
	I0108 20:52:29.607349  763597 start.go:83] releasing machines lock for "running-upgrade-282391", held for 2.232060484s
	I0108 20:52:29.607433  763597 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-282391
	I0108 20:52:29.637121  763597 ssh_runner.go:195] Run: cat /version.json
	I0108 20:52:29.637174  763597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-282391
	I0108 20:52:29.637399  763597 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 20:52:29.637437  763597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-282391
	I0108 20:52:29.667947  763597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33590 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/running-upgrade-282391/id_rsa Username:docker}
	I0108 20:52:29.675775  763597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33590 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/running-upgrade-282391/id_rsa Username:docker}
	W0108 20:52:29.871324  763597 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0108 20:52:29.871520  763597 ssh_runner.go:195] Run: systemctl --version
	I0108 20:52:29.876748  763597 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 20:52:30.091717  763597 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 20:52:30.098617  763597 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:52:30.128997  763597 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 20:52:30.129161  763597 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:52:30.160267  763597 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 20:52:30.160291  763597 start.go:475] detecting cgroup driver to use...
	I0108 20:52:30.160360  763597 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 20:52:30.160430  763597 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	W0108 20:52:30.192765  763597 cruntime.go:290] disable failed: sudo systemctl stop -f containerd: Process exited with status 1
	stdout:
	
	stderr:
	Job for containerd.service canceled.
	I0108 20:52:30.192872  763597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	W0108 20:52:30.208999  763597 crio.go:202] disableOthers: containerd is still active
	I0108 20:52:30.209179  763597 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:52:30.231493  763597 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0108 20:52:30.231591  763597 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:52:30.267439  763597 out.go:177] 
	W0108 20:52:30.269577  763597 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0108 20:52:30.269599  763597 out.go:239] * 
	W0108 20:52:30.270810  763597 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 20:52:30.273407  763597 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-282391 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
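The root cause is visible in the stderr above: the new binary's cri-o enable step pipes sed over /etc/crio/crio.conf.d/02-crio.conf, but the kicbase v0.0.17 image that the v1.17.0 profile still runs does not ship that drop-in file (its cri-o configuration presumably predates the drop-in layout), so sed exits with status 2 and start aborts with RUNTIME_ENABLE. A minimal defensive sketch, assuming the drop-in can simply be created before editing (hypothetical, not minikube's actual fix):

	# Create the drop-in the old image lacks, then set the pause image.
	# sed on an empty file matches nothing, so append the key if it is absent.
	sudo mkdir -p /etc/crio/crio.conf.d && sudo touch /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	grep -q '^pause_image' /etc/crio/crio.conf.d/02-crio.conf || \
	  echo 'pause_image = "registry.k8s.io/pause:3.2"' | sudo tee -a /etc/crio/crio.conf.d/02-crio.conf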
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2024-01-08 20:52:30.298000166 +0000 UTC m=+2580.672871331
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-282391
helpers_test.go:235: (dbg) docker inspect running-upgrade-282391:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "46f3d962f3cf9ae24ef9aa24beb533a9529df0fd22eb99af6b56ce0cbd577b09",
	        "Created": "2024-01-08T20:51:43.043688822Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 760267,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T20:51:43.447694915Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/46f3d962f3cf9ae24ef9aa24beb533a9529df0fd22eb99af6b56ce0cbd577b09/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/46f3d962f3cf9ae24ef9aa24beb533a9529df0fd22eb99af6b56ce0cbd577b09/hostname",
	        "HostsPath": "/var/lib/docker/containers/46f3d962f3cf9ae24ef9aa24beb533a9529df0fd22eb99af6b56ce0cbd577b09/hosts",
	        "LogPath": "/var/lib/docker/containers/46f3d962f3cf9ae24ef9aa24beb533a9529df0fd22eb99af6b56ce0cbd577b09/46f3d962f3cf9ae24ef9aa24beb533a9529df0fd22eb99af6b56ce0cbd577b09-json.log",
	        "Name": "/running-upgrade-282391",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-282391:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-282391",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0b52256ea2ecac28dff426738fd1a51973b009875a595f1ed15aef63699d991c-init/diff:/var/lib/docker/overlay2/7bd7f7d4f7a96e360ebc178b00c82173ead4fb4a7e97b613498165aac8813ecf/diff:/var/lib/docker/overlay2/8180366f4c6b833bcf9b4327f9b057ee87e978a286e144df96fc760862654ace/diff:/var/lib/docker/overlay2/112ef20a8443a4a561aa58e38019216936a3ad7223ac66077ee7d10eacc016d6/diff:/var/lib/docker/overlay2/d4103d566aeef7a7c2040d09072e1446c0e813cdc13b758f9cac63aae801baa1/diff:/var/lib/docker/overlay2/35572e893fe9d0ae80de57760a5cb0035d2935e15fa65803b1354b0bb610627c/diff:/var/lib/docker/overlay2/86b1a16d14129e584758e0b2290c1cdc8dc7e82fa05237ab760c8bf16de51c1b/diff:/var/lib/docker/overlay2/a3769e472e33900fa6c426d1e8fde102b94a3d253e842f76f3e72a1309dde2cb/diff:/var/lib/docker/overlay2/a18991605fe2bf8b79707e46faf9cd37b2890f8a0bea7f3d2f91668ef93874c0/diff:/var/lib/docker/overlay2/6b841945f670410ce69847cb472903ebab88dac748da1dca4c062587d8f0ccac/diff:/var/lib/docker/overlay2/0dad9a
00f32482e5a12c2e70624ec9236d3356b3863fe3c4e53e7cee885f6b93/diff:/var/lib/docker/overlay2/094f41a465af910b5e8ce6181ce0d9f06fc3b70ced5f3383fff3811b8426f1e1/diff:/var/lib/docker/overlay2/835d5068609467475f8db30db20f113e109e190d8d62d7d8cea5588bd8c1c08d/diff:/var/lib/docker/overlay2/99855b5a99e91577496bc7687590efa58ae096fc883260fe9abf28622b9bded4/diff:/var/lib/docker/overlay2/a473c76d569c986924bff7cf42823b9de18f083dabd5c87feca03b6e1b558d56/diff:/var/lib/docker/overlay2/2cb2b315f62d4b1442c38474fa8cb730bf1a0805e75cdeebdc206c689901ab1d/diff:/var/lib/docker/overlay2/e2f15de7c17e9282dd753e0b57063ac6ec084da3d4cd45a56aac2495842a263f/diff:/var/lib/docker/overlay2/230acaf72a082251ba308b271ced726b3bddafb8fe65d09f3f99664aa7c51d6e/diff:/var/lib/docker/overlay2/0a5ced5ab52b718b50d00a8ba29367139fade678014ffe35be2159ffd6153a43/diff:/var/lib/docker/overlay2/b00ac83a30ab80b3da73206553aa4bbaaa83d5e5b0caee3a8380cd0cd0680f47/diff:/var/lib/docker/overlay2/c7df2ed36ebf73b6aaaae2e85d74525d2241c688bce62a9e35ecc3ee2978643c/diff:/var/lib/d
ocker/overlay2/34ec4f23c36dbb11e530f4b1c41ba722f3cb9e42408e36e4bbb7201b6b92e8a3/diff:/var/lib/docker/overlay2/c254df4f1ca37631da330237a0b14a97899443adc3d1ad0464fd53647495697d/diff:/var/lib/docker/overlay2/93d5a29e06840eb21aa8205170b35da7878e8ad5cc26f14284e3e9cbbc81e29b/diff:/var/lib/docker/overlay2/15edb436f7a6f17bfdbdd3b9c20158148f279bede62e0158bfbc0b3cd0fc67a8/diff:/var/lib/docker/overlay2/ea0e01e19f2669c3f2bd2e74af6cac307b886b6460dce465b0af91d2df65be2b/diff:/var/lib/docker/overlay2/9b02dd226da96954107ff3eecafc3da5c10ef298f98e56b6db7c96f356ef376d/diff:/var/lib/docker/overlay2/43115b34af7cd2f094683526712a0004c8a7cff9cd349cb02bb15f483baa9183/diff:/var/lib/docker/overlay2/35f542e833800a7b5faaf289f862fffbeaed8560bff7cfd325ff8dd8766020fd/diff:/var/lib/docker/overlay2/995aed6fb219dff8553edf4a1429d254dcd808bed821cbb23a6d9d5fab1a4be7/diff:/var/lib/docker/overlay2/ecef601f547d527773e5f16c9c909e0a665512ec2b7c55972feea8c77ceb23a9/diff:/var/lib/docker/overlay2/df205efa3222d274da18e41f9f8a6b757548e13cc64c553f396e0cdaf49
e8435/diff:/var/lib/docker/overlay2/754fd26fd5ed4dcbe16a301f62e223982efee49bd398f0d8c5cf551944e80848/diff:/var/lib/docker/overlay2/8acda0b26743e7e9aad8901f1ebf9614ffc34dfb2be4a2ec5f0f3833b02ac9fc/diff:/var/lib/docker/overlay2/0c7b2c530e7c4ade75d4c066fc3e35723f9325143f6e92f10231b70a070135ea/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0b52256ea2ecac28dff426738fd1a51973b009875a595f1ed15aef63699d991c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0b52256ea2ecac28dff426738fd1a51973b009875a595f1ed15aef63699d991c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0b52256ea2ecac28dff426738fd1a51973b009875a595f1ed15aef63699d991c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-282391",
	                "Source": "/var/lib/docker/volumes/running-upgrade-282391/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-282391",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-282391",
	                "name.minikube.sigs.k8s.io": "running-upgrade-282391",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "67c20d8cd6bdb9f5fd0a3004229333ed0ec51f83476555b14ce6b81d647614c6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33590"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33589"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33588"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33587"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/67c20d8cd6bd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-282391": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.24"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "46f3d962f3cf",
	                        "running-upgrade-282391"
	                    ],
	                    "NetworkID": "eb92994dfcd84f389c7112623601673b5d216f22f603594fa15e3f8b1b8c9250",
	                    "EndpointID": "756a11a395e3fffcb2b0303ec39f54f0bb48b9595548762d9578b0ea10b2b579",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.24",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:18",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
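The inspect output confirms the container outlived the failed start: State.Status is "running" and SSH is published on 127.0.0.1:33590. To pull just that port mapping, the same Go template the test driver logs above works directly against the docker CLI:

	docker container inspect running-upgrade-282391 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'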
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-282391 -n running-upgrade-282391
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-282391 -n running-upgrade-282391: exit status 4 (516.353484ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 20:52:30.761150  764220 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-282391" does not appear in /home/jenkins/minikube-integration/17907-633350/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-282391" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
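The exit-status-4 here is a follow-on failure: the upgrade aborted before the new binary could write the cluster endpoint into /home/jenkins/minikube-integration/17907-633350/kubeconfig, so status cannot extract the profile's API server IP. For a profile that is being kept rather than deleted, the warning's own suggestion applies (shown for reference only; the test deletes the profile below):

	minikube update-context -p running-upgrade-282391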
helpers_test.go:175: Cleaning up "running-upgrade-282391" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-282391
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-282391: (2.895696489s)
--- FAIL: TestRunningBinaryUpgrade (76.32s)

                                                
                                    
x
+
TestMissingContainerUpgrade (188.28s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.3176504561.exe start -p missing-upgrade-759449 --memory=2200 --driver=docker  --container-runtime=crio
E0108 20:46:26.378638  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.17.0.3176504561.exe start -p missing-upgrade-759449 --memory=2200 --driver=docker  --container-runtime=crio: (2m12.717899953s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-759449
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-759449: (10.385722041s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-759449
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-759449 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-759449 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (41.903296124s)

                                                
                                                
-- stdout --
	* [missing-upgrade-759449] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-633350/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-633350/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-759449 in cluster missing-upgrade-759449
	* Pulling base image v0.0.42-1703498848-17857 ...
	* docker "missing-upgrade-759449" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 20:48:43.374020  749885 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:48:43.374142  749885 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:48:43.374177  749885 out.go:309] Setting ErrFile to fd 2...
	I0108 20:48:43.374192  749885 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:48:43.374833  749885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-633350/.minikube/bin
	I0108 20:48:43.375264  749885 out.go:303] Setting JSON to false
	I0108 20:48:43.376137  749885 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12666,"bootTime":1704734258,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0108 20:48:43.376212  749885 start.go:138] virtualization:  
	I0108 20:48:43.380420  749885 out.go:177] * [missing-upgrade-759449] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 20:48:43.384027  749885 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:48:43.384075  749885 notify.go:220] Checking for updates...
	I0108 20:48:43.387353  749885 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:48:43.389551  749885 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-633350/kubeconfig
	I0108 20:48:43.391937  749885 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-633350/.minikube
	I0108 20:48:43.394249  749885 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0108 20:48:43.396552  749885 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:48:43.399231  749885 config.go:182] Loaded profile config "missing-upgrade-759449": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0108 20:48:43.402020  749885 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0108 20:48:43.404445  749885 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:48:43.447513  749885 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:48:43.447658  749885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:48:43.562015  749885 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-08 20:48:43.552255733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:48:43.562115  749885 docker.go:295] overlay module found
	I0108 20:48:43.565066  749885 out.go:177] * Using the docker driver based on existing profile
	I0108 20:48:43.567959  749885 start.go:298] selected driver: docker
	I0108 20:48:43.567980  749885 start.go:902] validating driver "docker" against &{Name:missing-upgrade-759449 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-759449 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.154 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 20:48:43.568080  749885 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:48:43.568710  749885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:48:43.685699  749885 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-08 20:48:43.673460524 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:48:43.686061  749885 cni.go:84] Creating CNI manager for ""
	I0108 20:48:43.686080  749885 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 20:48:43.686100  749885 start_flags.go:323] config:
	{Name:missing-upgrade-759449 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-759449 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.154 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 20:48:43.688875  749885 out.go:177] * Starting control plane node missing-upgrade-759449 in cluster missing-upgrade-759449
	I0108 20:48:43.690975  749885 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 20:48:43.693467  749885 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:48:43.696358  749885 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0108 20:48:43.696439  749885 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0108 20:48:43.722847  749885 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I0108 20:48:43.723018  749885 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I0108 20:48:43.723540  749885 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W0108 20:48:43.771671  749885 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0108 20:48:43.771809  749885 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/missing-upgrade-759449/config.json ...
	I0108 20:48:43.771909  749885 cache.go:107] acquiring lock: {Name:mk3c8286e2cc2bf23333f2fde93bbbffaca2d67d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:48:43.772013  749885 cache.go:115] /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0108 20:48:43.772032  749885 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 135.968µs
	I0108 20:48:43.772044  749885 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0108 20:48:43.772062  749885 cache.go:107] acquiring lock: {Name:mk44cb6b843ba721f847f64865744c5f7915221a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:48:43.772072  749885 cache.go:107] acquiring lock: {Name:mkc0e5eb4ee5b95208370bf9ab86e472522e23cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:48:43.772161  749885 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I0108 20:48:43.772174  749885 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I0108 20:48:43.772335  749885 cache.go:107] acquiring lock: {Name:mk94eab46ef127117a2ac55cb5fea6764e134f30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:48:43.772435  749885 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0108 20:48:43.772559  749885 cache.go:107] acquiring lock: {Name:mke00c1fa35ee123cfffa38e119041010daad15e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:48:43.772505  749885 cache.go:107] acquiring lock: {Name:mkdf84d353c206e379592d67df524a8e57bb96f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:48:43.772698  749885 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0108 20:48:43.772750  749885 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0108 20:48:43.772847  749885 cache.go:107] acquiring lock: {Name:mke5483038fdde0966ca33aae1d2ab3eafd4be68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:48:43.772953  749885 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0108 20:48:43.773117  749885 cache.go:107] acquiring lock: {Name:mk1f9dd73c9040b4843877ea6d579cd2a6afc14d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:48:43.773775  749885 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0108 20:48:43.774187  749885 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I0108 20:48:43.773840  749885 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0108 20:48:43.774955  749885 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I0108 20:48:43.773886  749885 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I0108 20:48:43.775631  749885 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I0108 20:48:43.773924  749885 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0108 20:48:43.773990  749885 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0108 20:48:44.139662  749885 cache.go:162] opening:  /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	I0108 20:48:44.150458  749885 cache.go:162] opening:  /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W0108 20:48:44.164467  749885 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I0108 20:48:44.164538  749885 cache.go:162] opening:  /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	I0108 20:48:44.166335  749885 cache.go:162] opening:  /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	W0108 20:48:44.175218  749885 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I0108 20:48:44.175283  749885 cache.go:162] opening:  /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	W0108 20:48:44.183143  749885 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I0108 20:48:44.183188  749885 cache.go:162] opening:  /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	I0108 20:48:44.194063  749885 cache.go:162] opening:  /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	I0108 20:48:44.265416  749885 cache.go:157] /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0108 20:48:44.265445  749885 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 493.113917ms
	I0108 20:48:44.265458  749885 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  129.29 KiB / 287.99 MiB [] 0.04% ? p/s ?
	I0108 20:48:44.667507  749885 cache.go:157] /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0108 20:48:44.669913  749885 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 897.059575ms
	I0108 20:48:44.669952  749885 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0108 20:48:44.693520  749885 cache.go:157] /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0108 20:48:44.693543  749885 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 920.430348ms
	I0108 20:48:44.693578  749885 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  6.14 MiB / 287.99 MiB  2.13% ? p/s ?
	I0108 20:48:44.900059  749885 cache.go:157] /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0108 20:48:44.900126  749885 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.128063786s
	I0108 20:48:44.900155  749885 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  10.97 MiB / 287.99 MiB  3.81% 18.31 MiB p/s
	I0108 20:48:45.121372  749885 cache.go:157] /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0108 20:48:45.121414  749885 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 1.34893745s
	I0108 20:48:45.121428  749885 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 18.74 MiB p/s
	I0108 20:48:45.633030  749885 cache.go:157] /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0108 20:48:45.633057  749885 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 1.860988221s
	I0108 20:48:45.633071  749885 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  43.87 MiB / 287.99 MiB  15.23% 19.16 MiB p/s
	I0108 20:48:46.438541  749885 cache.go:157] /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0108 20:48:46.438569  749885 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 2.666011676s
	I0108 20:48:46.438584  749885 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0108 20:48:46.438634  749885 cache.go:87] Successfully saved all images to host disk.
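
	The W-level "arch mismatch: want arm64 got amd64. fixing" lines above come from comparing each image's config architecture against the host before caching it. A minimal sketch of that check using go-containerregistry, the library minikube's image package builds on; the exact call sites and the re-pull step are assumptions, not the verbatim implementation:

	package main

	import (
		"fmt"
		"runtime"

		"github.com/google/go-containerregistry/pkg/name"
		"github.com/google/go-containerregistry/pkg/v1/remote"
	)

	func main() {
		ref, err := name.ParseReference("registry.k8s.io/pause:3.2")
		if err != nil {
			panic(err)
		}
		// remote.Image resolves the manifest; for a multi-arch image the
		// registry may hand back a default platform that differs from the host's.
		img, err := remote.Image(ref)
		if err != nil {
			panic(err)
		}
		cfg, err := img.ConfigFile()
		if err != nil {
			panic(err)
		}
		if cfg.Architecture != runtime.GOARCH {
			fmt.Printf("arch mismatch: want %s got %s. fixing\n",
				runtime.GOARCH, cfg.Architecture)
			// minikube then re-resolves for the host platform (assumption).
		}
	}

	Consistent with this, the cache lines above still end up saving arm64 tarballs after each "fixing" warning.
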
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 30.83 MiB p/s
	I0108 20:48:53.681364  749885 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I0108 20:48:53.681377  749885 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I0108 20:48:54.682855  749885 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I0108 20:48:54.682894  749885 cache.go:194] Successfully downloaded all kic artifacts
	I0108 20:48:54.682944  749885 start.go:365] acquiring machines lock for missing-upgrade-759449: {Name:mk4f847c22ee2de2dfad9da9979f7eea8b0603d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:48:54.683017  749885 start.go:369] acquired machines lock for "missing-upgrade-759449" in 54.605µs
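
	The machines-lock Spec printed above ({Name:... Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}) has the shape of a cross-process named mutex in the style of github.com/juju/mutex, which minikube wraps for its machines lock. Treating that mapping, and the shortened lock name below, as assumptions, a minimal sketch:

	package main

	import (
		"fmt"
		"time"

		"github.com/juju/clock"
		"github.com/juju/mutex/v2"
	)

	func main() {
		// Delay and Timeout mirror the Spec in the log; the Name is shortened
		// because juju/mutex restricts names to short lowercase identifiers.
		spec := mutex.Spec{
			Name:    "mkmachines",
			Clock:   clock.WallClock,
			Delay:   500 * time.Millisecond,
			Timeout: 10 * time.Minute,
		}
		start := time.Now()
		releaser, err := mutex.Acquire(spec)
		if err != nil {
			panic(err)
		}
		defer releaser.Release()
		fmt.Printf("acquired machines lock in %v\n", time.Since(start))
	}
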
	I0108 20:48:54.683037  749885 start.go:96] Skipping create...Using existing machine configuration
	I0108 20:48:54.683044  749885 fix.go:54] fixHost starting: 
	I0108 20:48:54.683336  749885 cli_runner.go:164] Run: docker container inspect missing-upgrade-759449 --format={{.State.Status}}
	W0108 20:48:54.711768  749885 cli_runner.go:211] docker container inspect missing-upgrade-759449 --format={{.State.Status}} returned with exit code 1
	I0108 20:48:54.711827  749885 fix.go:102] recreateIfNeeded on missing-upgrade-759449: state= err=unknown state "missing-upgrade-759449": docker container inspect missing-upgrade-759449 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-759449
	I0108 20:48:54.711852  749885 fix.go:107] machineExists: false. err=machine does not exist
	I0108 20:48:54.725321  749885 out.go:177] * docker "missing-upgrade-759449" container is missing, will recreate.
	I0108 20:48:54.736473  749885 delete.go:124] DEMOLISHING missing-upgrade-759449 ...
	I0108 20:48:54.736587  749885 cli_runner.go:164] Run: docker container inspect missing-upgrade-759449 --format={{.State.Status}}
	W0108 20:48:54.761975  749885 cli_runner.go:211] docker container inspect missing-upgrade-759449 --format={{.State.Status}} returned with exit code 1
	W0108 20:48:54.762033  749885 stop.go:75] unable to get state: unknown state "missing-upgrade-759449": docker container inspect missing-upgrade-759449 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-759449
	I0108 20:48:54.762052  749885 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-759449": docker container inspect missing-upgrade-759449 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-759449
	I0108 20:48:54.762639  749885 cli_runner.go:164] Run: docker container inspect missing-upgrade-759449 --format={{.State.Status}}
	W0108 20:48:54.788117  749885 cli_runner.go:211] docker container inspect missing-upgrade-759449 --format={{.State.Status}} returned with exit code 1
	I0108 20:48:54.788211  749885 delete.go:82] Unable to get host status for missing-upgrade-759449, assuming it has already been deleted: state: unknown state "missing-upgrade-759449": docker container inspect missing-upgrade-759449 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-759449
	I0108 20:48:54.788283  749885 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-759449
	W0108 20:48:54.817492  749885 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-759449 returned with exit code 1
	I0108 20:48:54.817523  749885 kic.go:371] could not find the container missing-upgrade-759449 to remove it. will try anyways
	I0108 20:48:54.817574  749885 cli_runner.go:164] Run: docker container inspect missing-upgrade-759449 --format={{.State.Status}}
	W0108 20:48:54.846156  749885 cli_runner.go:211] docker container inspect missing-upgrade-759449 --format={{.State.Status}} returned with exit code 1
	W0108 20:48:54.846207  749885 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-759449": docker container inspect missing-upgrade-759449 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-759449
	I0108 20:48:54.846278  749885 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-759449 /bin/bash -c "sudo init 0"
	W0108 20:48:54.865640  749885 cli_runner.go:211] docker exec --privileged -t missing-upgrade-759449 /bin/bash -c "sudo init 0" returned with exit code 1
	I0108 20:48:54.865669  749885 oci.go:650] error shutdown missing-upgrade-759449: docker exec --privileged -t missing-upgrade-759449 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-759449
	I0108 20:48:55.865816  749885 cli_runner.go:164] Run: docker container inspect missing-upgrade-759449 --format={{.State.Status}}
	W0108 20:48:55.888489  749885 cli_runner.go:211] docker container inspect missing-upgrade-759449 --format={{.State.Status}} returned with exit code 1
	I0108 20:48:55.888545  749885 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-759449": docker container inspect missing-upgrade-759449 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-759449
	I0108 20:48:55.888560  749885 oci.go:664] temporary error: container missing-upgrade-759449 status is  but expect it to be exited
	I0108 20:48:55.888590  749885 retry.go:31] will retry after 487.287421ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-759449": docker container inspect missing-upgrade-759449 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-759449
	I0108 20:48:56.376091  749885 cli_runner.go:164] Run: docker container inspect missing-upgrade-759449 --format={{.State.Status}}
	W0108 20:48:56.392530  749885 cli_runner.go:211] docker container inspect missing-upgrade-759449 --format={{.State.Status}} returned with exit code 1
	I0108 20:48:56.392587  749885 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-759449": docker container inspect missing-upgrade-759449 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-759449
	I0108 20:48:56.392600  749885 oci.go:664] temporary error: container missing-upgrade-759449 status is  but expect it to be exited
	I0108 20:48:56.392625  749885 retry.go:31] will retry after 617.320052ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-759449": docker container inspect missing-upgrade-759449 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-759449
	I0108 20:48:57.010520  749885 cli_runner.go:164] Run: docker container inspect missing-upgrade-759449 --format={{.State.Status}}
	W0108 20:48:57.032287  749885 cli_runner.go:211] docker container inspect missing-upgrade-759449 --format={{.State.Status}} returned with exit code 1
	I0108 20:48:57.032348  749885 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-759449": docker container inspect missing-upgrade-759449 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-759449
	I0108 20:48:57.032361  749885 oci.go:664] temporary error: container missing-upgrade-759449 status is  but expect it to be exited
	I0108 20:48:57.032386  749885 retry.go:31] will retry after 1.011709269s: couldn't verify container is exited. %v: unknown state "missing-upgrade-759449": docker container inspect missing-upgrade-759449 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-759449
	I0108 20:48:58.044320  749885 cli_runner.go:164] Run: docker container inspect missing-upgrade-759449 --format={{.State.Status}}
	W0108 20:48:58.077504  749885 cli_runner.go:211] docker container inspect missing-upgrade-759449 --format={{.State.Status}} returned with exit code 1
	I0108 20:48:58.077577  749885 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-759449": docker container inspect missing-upgrade-759449 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-759449
	I0108 20:48:58.077588  749885 oci.go:664] temporary error: container missing-upgrade-759449 status is  but expect it to be exited
	I0108 20:48:58.077614  749885 retry.go:31] will retry after 1.365247122s: couldn't verify container is exited. %v: unknown state "missing-upgrade-759449": docker container inspect missing-upgrade-759449 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-759449
	I0108 20:48:59.443070  749885 cli_runner.go:164] Run: docker container inspect missing-upgrade-759449 --format={{.State.Status}}
	W0108 20:48:59.460269  749885 cli_runner.go:211] docker container inspect missing-upgrade-759449 --format={{.State.Status}} returned with exit code 1
	I0108 20:48:59.460335  749885 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-759449": docker container inspect missing-upgrade-759449 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-759449
	I0108 20:48:59.460343  749885 oci.go:664] temporary error: container missing-upgrade-759449 status is  but expect it to be exited
	I0108 20:48:59.460368  749885 retry.go:31] will retry after 1.735942692s: couldn't verify container is exited. %v: unknown state "missing-upgrade-759449": docker container inspect missing-upgrade-759449 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-759449
	I0108 20:49:01.196473  749885 cli_runner.go:164] Run: docker container inspect missing-upgrade-759449 --format={{.State.Status}}
	W0108 20:49:01.220822  749885 cli_runner.go:211] docker container inspect missing-upgrade-759449 --format={{.State.Status}} returned with exit code 1
	I0108 20:49:01.220885  749885 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-759449": docker container inspect missing-upgrade-759449 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-759449
	I0108 20:49:01.220911  749885 oci.go:664] temporary error: container missing-upgrade-759449 status is  but expect it to be exited
	I0108 20:49:01.220937  749885 retry.go:31] will retry after 4.96052379s: couldn't verify container is exited. %v: unknown state "missing-upgrade-759449": docker container inspect missing-upgrade-759449 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-759449
	I0108 20:49:06.184080  749885 cli_runner.go:164] Run: docker container inspect missing-upgrade-759449 --format={{.State.Status}}
	W0108 20:49:06.200435  749885 cli_runner.go:211] docker container inspect missing-upgrade-759449 --format={{.State.Status}} returned with exit code 1
	I0108 20:49:06.200501  749885 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-759449": docker container inspect missing-upgrade-759449 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-759449
	I0108 20:49:06.200523  749885 oci.go:664] temporary error: container missing-upgrade-759449 status is  but expect it to be exited
	I0108 20:49:06.200553  749885 retry.go:31] will retry after 3.553681208s: couldn't verify container is exited. %v: unknown state "missing-upgrade-759449": docker container inspect missing-upgrade-759449 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-759449
	I0108 20:49:09.754463  749885 cli_runner.go:164] Run: docker container inspect missing-upgrade-759449 --format={{.State.Status}}
	W0108 20:49:09.771556  749885 cli_runner.go:211] docker container inspect missing-upgrade-759449 --format={{.State.Status}} returned with exit code 1
	I0108 20:49:09.771616  749885 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-759449": docker container inspect missing-upgrade-759449 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-759449
	I0108 20:49:09.771638  749885 oci.go:664] temporary error: container missing-upgrade-759449 status is  but expect it to be exited
	I0108 20:49:09.771665  749885 retry.go:31] will retry after 4.857656316s: couldn't verify container is exited. %v: unknown state "missing-upgrade-759449": docker container inspect missing-upgrade-759449 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-759449
	I0108 20:49:14.630104  749885 cli_runner.go:164] Run: docker container inspect missing-upgrade-759449 --format={{.State.Status}}
	W0108 20:49:14.647337  749885 cli_runner.go:211] docker container inspect missing-upgrade-759449 --format={{.State.Status}} returned with exit code 1
	I0108 20:49:14.647409  749885 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-759449": docker container inspect missing-upgrade-759449 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-759449
	I0108 20:49:14.647423  749885 oci.go:664] temporary error: container missing-upgrade-759449 status is  but expect it to be exited
	I0108 20:49:14.647456  749885 oci.go:88] couldn't shut down missing-upgrade-759449 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-759449": docker container inspect missing-upgrade-759449 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-759449
	 
	I0108 20:49:14.647527  749885 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-759449
	I0108 20:49:14.664861  749885 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-759449
	W0108 20:49:14.683730  749885 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-759449 returned with exit code 1
	I0108 20:49:14.683851  749885 cli_runner.go:164] Run: docker network inspect missing-upgrade-759449 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 20:49:14.702512  749885 cli_runner.go:164] Run: docker network rm missing-upgrade-759449
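
	The long run of "temporary error verifying shutdown ... will retry after ..." blocks above is a bounded retry loop: inspect the container state, back off, and once the retry budget is spent, fall through to the forced docker rm -f seen just above. A condensed sketch of that pattern; the backoff schedule and budget here are illustrative, not minikube's exact constants:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func containerStatus(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		name := "missing-upgrade-759449"
		deadline := time.Now().Add(20 * time.Second)
		for backoff := 500 * time.Millisecond; time.Now().Before(deadline); backoff *= 2 {
			status, err := containerStatus(name)
			if err == nil && status == "exited" {
				fmt.Println("container shut down cleanly")
				return
			}
			fmt.Printf("will retry after %v: container status is %q, err: %v\n", backoff, status, err)
			time.Sleep(backoff)
		}
		// Mirrors the log: failure to verify is treated as non-fatal and the
		// container is force-removed anyway.
		fmt.Println("couldn't verify container is exited (might be okay); forcing removal")
		exec.Command("docker", "rm", "-f", "-v", name).Run()
	}
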
	I0108 20:49:14.814818  749885 fix.go:114] Sleeping 1 second for extra luck!
	I0108 20:49:15.815597  749885 start.go:125] createHost starting for "" (driver="docker")
	I0108 20:49:15.818474  749885 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0108 20:49:15.818627  749885 start.go:159] libmachine.API.Create for "missing-upgrade-759449" (driver="docker")
	I0108 20:49:15.818652  749885 client.go:168] LocalClient.Create starting
	I0108 20:49:15.818733  749885 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem
	I0108 20:49:15.818769  749885 main.go:141] libmachine: Decoding PEM data...
	I0108 20:49:15.818787  749885 main.go:141] libmachine: Parsing certificate...
	I0108 20:49:15.818851  749885 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem
	I0108 20:49:15.818874  749885 main.go:141] libmachine: Decoding PEM data...
	I0108 20:49:15.818891  749885 main.go:141] libmachine: Parsing certificate...
	I0108 20:49:15.819134  749885 cli_runner.go:164] Run: docker network inspect missing-upgrade-759449 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 20:49:15.838927  749885 cli_runner.go:211] docker network inspect missing-upgrade-759449 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 20:49:15.839012  749885 network_create.go:281] running [docker network inspect missing-upgrade-759449] to gather additional debugging logs...
	I0108 20:49:15.839035  749885 cli_runner.go:164] Run: docker network inspect missing-upgrade-759449
	W0108 20:49:15.855835  749885 cli_runner.go:211] docker network inspect missing-upgrade-759449 returned with exit code 1
	I0108 20:49:15.855866  749885 network_create.go:284] error running [docker network inspect missing-upgrade-759449]: docker network inspect missing-upgrade-759449: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-759449 not found
	I0108 20:49:15.855880  749885 network_create.go:286] output of [docker network inspect missing-upgrade-759449]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-759449 not found
	
	** /stderr **
	I0108 20:49:15.855983  749885 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 20:49:15.873281  749885 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c71a8e375fca IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:e7:14:80:72} reservation:<nil>}
	I0108 20:49:15.873870  749885 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-3e130247834c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:bb:41:96:80} reservation:<nil>}
	I0108 20:49:15.874492  749885 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-292cd6855718 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:50:2d:d6:f1} reservation:<nil>}
	I0108 20:49:15.875397  749885 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40036c5070}
	I0108 20:49:15.875421  749885 network_create.go:124] attempt to create docker network missing-upgrade-759449 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0108 20:49:15.875490  749885 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-759449 missing-upgrade-759449
	I0108 20:49:15.953296  749885 network_create.go:108] docker network missing-upgrade-759449 192.168.76.0/24 created
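
	The three "skipping subnet ... that is taken" lines and the final "using free private subnet" line above show minikube stepping through candidate /24s (192.168.49.0, .58.0, .67.0, .76.0 - the third octet advancing by 9) until one is not claimed by an existing docker bridge. A sketch of that scan; the +9 step is read off the log, and the taken-set lookup is an assumption standing in for docker network inspection:

	package main

	import "fmt"

	func main() {
		// Subnets already claimed by existing docker bridges (from the log).
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
		}
		for third := 49; third <= 247; third += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", third)
			if taken[subnet] {
				fmt.Println("skipping subnet", subnet, "that is taken")
				continue
			}
			// Gateway is x.x.x.1; the node container gets the first
			// client address, x.x.x.2, as the static IP in the log shows.
			fmt.Println("using free private subnet", subnet)
			return
		}
	}
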
	I0108 20:49:15.953329  749885 kic.go:121] calculated static IP "192.168.76.2" for the "missing-upgrade-759449" container
	I0108 20:49:15.953405  749885 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 20:49:15.970347  749885 cli_runner.go:164] Run: docker volume create missing-upgrade-759449 --label name.minikube.sigs.k8s.io=missing-upgrade-759449 --label created_by.minikube.sigs.k8s.io=true
	I0108 20:49:15.987533  749885 oci.go:103] Successfully created a docker volume missing-upgrade-759449
	I0108 20:49:15.987620  749885 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-759449-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-759449 --entrypoint /usr/bin/test -v missing-upgrade-759449:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I0108 20:49:16.638602  749885 oci.go:107] Successfully prepared a docker volume missing-upgrade-759449
	I0108 20:49:16.638643  749885 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W0108 20:49:16.638786  749885 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 20:49:16.638902  749885 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 20:49:16.708271  749885 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-759449 --name missing-upgrade-759449 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-759449 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-759449 --network missing-upgrade-759449 --ip 192.168.76.2 --volume missing-upgrade-759449:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I0108 20:49:17.080559  749885 cli_runner.go:164] Run: docker container inspect missing-upgrade-759449 --format={{.State.Running}}
	I0108 20:49:17.111284  749885 cli_runner.go:164] Run: docker container inspect missing-upgrade-759449 --format={{.State.Status}}
	I0108 20:49:17.139790  749885 cli_runner.go:164] Run: docker exec missing-upgrade-759449 stat /var/lib/dpkg/alternatives/iptables
	I0108 20:49:17.218365  749885 oci.go:144] the created container "missing-upgrade-759449" has a running status.
	I0108 20:49:17.218394  749885 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17907-633350/.minikube/machines/missing-upgrade-759449/id_rsa...
	I0108 20:49:17.395584  749885 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17907-633350/.minikube/machines/missing-upgrade-759449/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 20:49:17.430614  749885 cli_runner.go:164] Run: docker container inspect missing-upgrade-759449 --format={{.State.Status}}
	I0108 20:49:17.455324  749885 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 20:49:17.455342  749885 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-759449 chown docker:docker /home/docker/.ssh/authorized_keys]
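
	The two kic steps above - "Creating ssh key for kic" followed by copying id_rsa.pub into /home/docker/.ssh/authorized_keys - boil down to generating an RSA keypair and serializing the public half in authorized_keys format. A minimal sketch; the key size and PEM encoding are assumptions:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"fmt"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		priv, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// Private half, written as id_rsa in the machine directory.
		privPEM := pem.EncodeToMemory(&pem.Block{
			Type:  "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(priv),
		})
		// Public half, rendered in the authorized_keys wire format.
		pub, err := ssh.NewPublicKey(&priv.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("id_rsa: %d bytes\n", len(privPEM))
		fmt.Printf("authorized_keys entry: %s", ssh.MarshalAuthorizedKey(pub))
	}
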
	I0108 20:49:17.520127  749885 cli_runner.go:164] Run: docker container inspect missing-upgrade-759449 --format={{.State.Status}}
	I0108 20:49:17.543706  749885 machine.go:88] provisioning docker machine ...
	I0108 20:49:17.543737  749885 ubuntu.go:169] provisioning hostname "missing-upgrade-759449"
	I0108 20:49:17.543808  749885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-759449
	I0108 20:49:17.577887  749885 main.go:141] libmachine: Using SSH client type: native
	I0108 20:49:17.578351  749885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33578 <nil> <nil>}
	I0108 20:49:17.578364  749885 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-759449 && echo "missing-upgrade-759449" | sudo tee /etc/hostname
	I0108 20:49:17.579161  749885 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0108 20:49:20.734615  749885 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-759449
	
	I0108 20:49:20.734695  749885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-759449
	I0108 20:49:20.757244  749885 main.go:141] libmachine: Using SSH client type: native
	I0108 20:49:20.757640  749885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33578 <nil> <nil>}
	I0108 20:49:20.757657  749885 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-759449' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-759449/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-759449' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:49:20.903366  749885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:49:20.903434  749885 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17907-633350/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-633350/.minikube}
	I0108 20:49:20.903479  749885 ubuntu.go:177] setting up certificates
	I0108 20:49:20.903519  749885 provision.go:83] configureAuth start
	I0108 20:49:20.903602  749885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-759449
	I0108 20:49:20.921638  749885 provision.go:138] copyHostCerts
	I0108 20:49:20.921699  749885 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-633350/.minikube/ca.pem, removing ...
	I0108 20:49:20.921708  749885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-633350/.minikube/ca.pem
	I0108 20:49:20.921785  749885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-633350/.minikube/ca.pem (1082 bytes)
	I0108 20:49:20.921875  749885 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-633350/.minikube/cert.pem, removing ...
	I0108 20:49:20.921880  749885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-633350/.minikube/cert.pem
	I0108 20:49:20.921905  749885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-633350/.minikube/cert.pem (1123 bytes)
	I0108 20:49:20.921956  749885 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-633350/.minikube/key.pem, removing ...
	I0108 20:49:20.921961  749885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-633350/.minikube/key.pem
	I0108 20:49:20.921983  749885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-633350/.minikube/key.pem (1679 bytes)
	I0108 20:49:20.922024  749885 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-759449 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-759449]
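
	provision.go:112 above generates a server certificate whose SANs are exactly the list in the log: the node IP 192.168.76.2, 127.0.0.1, localhost, minikube, and the machine name. A compact sketch of building such a template with crypto/x509; it self-signs for brevity, whereas minikube signs with the ca-key.pem shown above:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.missing-upgrade-759449"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SAN list copied from the san=[...] field in the log.
			IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "missing-upgrade-759449"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Printf("server cert: %d DER bytes\n", len(der))
	}
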
	I0108 20:49:21.509624  749885 provision.go:172] copyRemoteCerts
	I0108 20:49:21.509694  749885 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:49:21.509737  749885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-759449
	I0108 20:49:21.528544  749885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33578 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/missing-upgrade-759449/id_rsa Username:docker}
	I0108 20:49:21.628006  749885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 20:49:21.650572  749885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 20:49:21.672420  749885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 20:49:21.694631  749885 provision.go:86] duration metric: configureAuth took 791.075751ms
	I0108 20:49:21.694662  749885 ubuntu.go:193] setting minikube options for container-runtime
	I0108 20:49:21.694839  749885 config.go:182] Loaded profile config "missing-upgrade-759449": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0108 20:49:21.694948  749885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-759449
	I0108 20:49:21.712686  749885 main.go:141] libmachine: Using SSH client type: native
	I0108 20:49:21.713110  749885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33578 <nil> <nil>}
	I0108 20:49:21.713132  749885 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 20:49:22.110492  749885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 20:49:22.110511  749885 machine.go:91] provisioned docker machine in 4.566784471s
	I0108 20:49:22.110521  749885 client.go:171] LocalClient.Create took 6.291863659s
	I0108 20:49:22.110535  749885 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-759449" took 6.291908976s
	I0108 20:49:22.110543  749885 start.go:300] post-start starting for "missing-upgrade-759449" (driver="docker")
	I0108 20:49:22.110553  749885 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:49:22.110624  749885 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:49:22.110670  749885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-759449
	I0108 20:49:22.128878  749885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33578 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/missing-upgrade-759449/id_rsa Username:docker}
	I0108 20:49:22.227212  749885 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:49:22.230852  749885 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 20:49:22.230881  749885 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 20:49:22.230895  749885 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 20:49:22.230902  749885 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0108 20:49:22.230912  749885 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-633350/.minikube/addons for local assets ...
	I0108 20:49:22.230968  749885 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-633350/.minikube/files for local assets ...
	I0108 20:49:22.231051  749885 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/6387322.pem -> 6387322.pem in /etc/ssl/certs
	I0108 20:49:22.231153  749885 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 20:49:22.239270  749885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/6387322.pem --> /etc/ssl/certs/6387322.pem (1708 bytes)
	I0108 20:49:22.260215  749885 start.go:303] post-start completed in 149.657357ms
	I0108 20:49:22.260562  749885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-759449
	I0108 20:49:22.277830  749885 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/missing-upgrade-759449/config.json ...
	I0108 20:49:22.278111  749885 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:49:22.278153  749885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-759449
	I0108 20:49:22.295403  749885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33578 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/missing-upgrade-759449/id_rsa Username:docker}
	I0108 20:49:22.392057  749885 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 20:49:22.397026  749885 start.go:128] duration metric: createHost completed in 6.581391088s
	I0108 20:49:22.397122  749885 cli_runner.go:164] Run: docker container inspect missing-upgrade-759449 --format={{.State.Status}}
	W0108 20:49:22.414348  749885 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 20:49:22.414377  749885 machine.go:88] provisioning docker machine ...
	I0108 20:49:22.414394  749885 ubuntu.go:169] provisioning hostname "missing-upgrade-759449"
	I0108 20:49:22.414588  749885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-759449
	I0108 20:49:22.432741  749885 main.go:141] libmachine: Using SSH client type: native
	I0108 20:49:22.433148  749885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33578 <nil> <nil>}
	I0108 20:49:22.433166  749885 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-759449 && echo "missing-upgrade-759449" | sudo tee /etc/hostname
	I0108 20:49:22.580874  749885 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-759449
	
	I0108 20:49:22.580952  749885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-759449
	I0108 20:49:22.598747  749885 main.go:141] libmachine: Using SSH client type: native
	I0108 20:49:22.599151  749885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33578 <nil> <nil>}
	I0108 20:49:22.599175  749885 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-759449' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-759449/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-759449' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:49:22.739168  749885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:49:22.739197  749885 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17907-633350/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-633350/.minikube}
	I0108 20:49:22.739256  749885 ubuntu.go:177] setting up certificates
	I0108 20:49:22.739271  749885 provision.go:83] configureAuth start
	I0108 20:49:22.739347  749885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-759449
	I0108 20:49:22.757119  749885 provision.go:138] copyHostCerts
	I0108 20:49:22.757182  749885 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-633350/.minikube/ca.pem, removing ...
	I0108 20:49:22.757196  749885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-633350/.minikube/ca.pem
	I0108 20:49:22.757270  749885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-633350/.minikube/ca.pem (1082 bytes)
	I0108 20:49:22.757363  749885 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-633350/.minikube/cert.pem, removing ...
	I0108 20:49:22.757374  749885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-633350/.minikube/cert.pem
	I0108 20:49:22.757401  749885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-633350/.minikube/cert.pem (1123 bytes)
	I0108 20:49:22.757454  749885 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-633350/.minikube/key.pem, removing ...
	I0108 20:49:22.757464  749885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-633350/.minikube/key.pem
	I0108 20:49:22.757489  749885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-633350/.minikube/key.pem (1679 bytes)
	I0108 20:49:22.757543  749885 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-759449 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-759449]
	I0108 20:49:23.448005  749885 provision.go:172] copyRemoteCerts
	I0108 20:49:23.448080  749885 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:49:23.448132  749885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-759449
	I0108 20:49:23.475564  749885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33578 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/missing-upgrade-759449/id_rsa Username:docker}
	I0108 20:49:23.582864  749885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 20:49:23.606560  749885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 20:49:23.629328  749885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 20:49:23.656500  749885 provision.go:86] duration metric: configureAuth took 917.214289ms
	I0108 20:49:23.656529  749885 ubuntu.go:193] setting minikube options for container-runtime
	I0108 20:49:23.656704  749885 config.go:182] Loaded profile config "missing-upgrade-759449": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0108 20:49:23.656815  749885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-759449
	I0108 20:49:23.683866  749885 main.go:141] libmachine: Using SSH client type: native
	I0108 20:49:23.684268  749885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33578 <nil> <nil>}
	I0108 20:49:23.684286  749885 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 20:49:24.011316  749885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 20:49:24.011344  749885 machine.go:91] provisioned docker machine in 1.596959271s
	I0108 20:49:24.011354  749885 start.go:300] post-start starting for "missing-upgrade-759449" (driver="docker")
	I0108 20:49:24.011365  749885 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:49:24.011429  749885 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:49:24.011483  749885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-759449
	I0108 20:49:24.037553  749885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33578 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/missing-upgrade-759449/id_rsa Username:docker}
	I0108 20:49:24.136048  749885 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:49:24.140226  749885 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 20:49:24.140252  749885 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 20:49:24.140263  749885 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 20:49:24.140274  749885 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0108 20:49:24.140287  749885 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-633350/.minikube/addons for local assets ...
	I0108 20:49:24.140345  749885 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-633350/.minikube/files for local assets ...
	I0108 20:49:24.140424  749885 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/6387322.pem -> 6387322.pem in /etc/ssl/certs
	I0108 20:49:24.140532  749885 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 20:49:24.152719  749885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/6387322.pem --> /etc/ssl/certs/6387322.pem (1708 bytes)
	I0108 20:49:24.175891  749885 start.go:303] post-start completed in 164.520694ms
	I0108 20:49:24.175973  749885 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:49:24.176019  749885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-759449
	I0108 20:49:24.196487  749885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33578 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/missing-upgrade-759449/id_rsa Username:docker}
	I0108 20:49:24.292988  749885 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 20:49:24.299127  749885 fix.go:56] fixHost completed within 29.616077389s
	I0108 20:49:24.299157  749885 start.go:83] releasing machines lock for "missing-upgrade-759449", held for 29.61613055s
	I0108 20:49:24.299233  749885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-759449
	I0108 20:49:24.317571  749885 ssh_runner.go:195] Run: cat /version.json
	I0108 20:49:24.317627  749885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-759449
	I0108 20:49:24.317851  749885 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 20:49:24.317904  749885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-759449
	I0108 20:49:24.349419  749885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33578 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/missing-upgrade-759449/id_rsa Username:docker}
	I0108 20:49:24.363887  749885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33578 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/missing-upgrade-759449/id_rsa Username:docker}
	W0108 20:49:24.447109  749885 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0108 20:49:24.447253  749885 ssh_runner.go:195] Run: systemctl --version
	I0108 20:49:24.572168  749885 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 20:49:24.664745  749885 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 20:49:24.670037  749885 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:49:24.690213  749885 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 20:49:24.690361  749885 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:49:24.722169  749885 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 20:49:24.722240  749885 start.go:475] detecting cgroup driver to use...
	I0108 20:49:24.722287  749885 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 20:49:24.722364  749885 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 20:49:24.750994  749885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 20:49:24.763032  749885 docker.go:217] disabling cri-docker service (if available) ...
	I0108 20:49:24.763096  749885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 20:49:24.774425  749885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 20:49:24.786117  749885 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0108 20:49:24.799151  749885 docker.go:227] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0108 20:49:24.799214  749885 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 20:49:24.899512  749885 docker.go:233] disabling docker service ...
	I0108 20:49:24.899621  749885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 20:49:24.912432  749885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 20:49:24.924499  749885 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 20:49:25.023707  749885 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 20:49:25.130618  749885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 20:49:25.142053  749885 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:49:25.158298  749885 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0108 20:49:25.158381  749885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:49:25.171851  749885 out.go:177] 
	W0108 20:49:25.174075  749885 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0108 20:49:25.174093  749885 out.go:239] * 
	W0108 20:49:25.175048  749885 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 20:49:25.177537  749885 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:344: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-759449 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
version_upgrade_test.go:346: *** TestMissingContainerUpgrade FAILED at 2024-01-08 20:49:25.226165155 +0000 UTC m=+2395.601036336
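
Root cause: the new binary rewrites pause_image in the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf, but the kicbase v0.0.17 node image that minikube v1.17.0 created this profile from does not ship that file (hence the "sed: can't read ... No such file or directory" above), so start aborts with RUNTIME_ENABLE. A minimal workaround sketch, assuming the older image keeps its CRI-O config at the legacy path /etc/crio/crio.conf (an assumption, not verified in this run):

	# sketch only, to be run inside the node container; the fallback path is assumed
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$CONF" ] || CONF=/etc/crio/crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
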
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-759449
helpers_test.go:235: (dbg) docker inspect missing-upgrade-759449:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6bf934508cf17b86fd64816c1e963aeee4dc315a85aeaa6a272795079a213c88",
	        "Created": "2024-01-08T20:49:16.725088796Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 751720,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T20:49:17.069300429Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/6bf934508cf17b86fd64816c1e963aeee4dc315a85aeaa6a272795079a213c88/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6bf934508cf17b86fd64816c1e963aeee4dc315a85aeaa6a272795079a213c88/hostname",
	        "HostsPath": "/var/lib/docker/containers/6bf934508cf17b86fd64816c1e963aeee4dc315a85aeaa6a272795079a213c88/hosts",
	        "LogPath": "/var/lib/docker/containers/6bf934508cf17b86fd64816c1e963aeee4dc315a85aeaa6a272795079a213c88/6bf934508cf17b86fd64816c1e963aeee4dc315a85aeaa6a272795079a213c88-json.log",
	        "Name": "/missing-upgrade-759449",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-759449:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-759449",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c380036211bcbd12603d351cec038933dc46ca7ce4e6e8c7cca91c856e3e97bd-init/diff:/var/lib/docker/overlay2/7bd7f7d4f7a96e360ebc178b00c82173ead4fb4a7e97b613498165aac8813ecf/diff:/var/lib/docker/overlay2/8180366f4c6b833bcf9b4327f9b057ee87e978a286e144df96fc760862654ace/diff:/var/lib/docker/overlay2/112ef20a8443a4a561aa58e38019216936a3ad7223ac66077ee7d10eacc016d6/diff:/var/lib/docker/overlay2/d4103d566aeef7a7c2040d09072e1446c0e813cdc13b758f9cac63aae801baa1/diff:/var/lib/docker/overlay2/35572e893fe9d0ae80de57760a5cb0035d2935e15fa65803b1354b0bb610627c/diff:/var/lib/docker/overlay2/86b1a16d14129e584758e0b2290c1cdc8dc7e82fa05237ab760c8bf16de51c1b/diff:/var/lib/docker/overlay2/a3769e472e33900fa6c426d1e8fde102b94a3d253e842f76f3e72a1309dde2cb/diff:/var/lib/docker/overlay2/a18991605fe2bf8b79707e46faf9cd37b2890f8a0bea7f3d2f91668ef93874c0/diff:/var/lib/docker/overlay2/6b841945f670410ce69847cb472903ebab88dac748da1dca4c062587d8f0ccac/diff:/var/lib/docker/overlay2/0dad9a
00f32482e5a12c2e70624ec9236d3356b3863fe3c4e53e7cee885f6b93/diff:/var/lib/docker/overlay2/094f41a465af910b5e8ce6181ce0d9f06fc3b70ced5f3383fff3811b8426f1e1/diff:/var/lib/docker/overlay2/835d5068609467475f8db30db20f113e109e190d8d62d7d8cea5588bd8c1c08d/diff:/var/lib/docker/overlay2/99855b5a99e91577496bc7687590efa58ae096fc883260fe9abf28622b9bded4/diff:/var/lib/docker/overlay2/a473c76d569c986924bff7cf42823b9de18f083dabd5c87feca03b6e1b558d56/diff:/var/lib/docker/overlay2/2cb2b315f62d4b1442c38474fa8cb730bf1a0805e75cdeebdc206c689901ab1d/diff:/var/lib/docker/overlay2/e2f15de7c17e9282dd753e0b57063ac6ec084da3d4cd45a56aac2495842a263f/diff:/var/lib/docker/overlay2/230acaf72a082251ba308b271ced726b3bddafb8fe65d09f3f99664aa7c51d6e/diff:/var/lib/docker/overlay2/0a5ced5ab52b718b50d00a8ba29367139fade678014ffe35be2159ffd6153a43/diff:/var/lib/docker/overlay2/b00ac83a30ab80b3da73206553aa4bbaaa83d5e5b0caee3a8380cd0cd0680f47/diff:/var/lib/docker/overlay2/c7df2ed36ebf73b6aaaae2e85d74525d2241c688bce62a9e35ecc3ee2978643c/diff:/var/lib/d
ocker/overlay2/34ec4f23c36dbb11e530f4b1c41ba722f3cb9e42408e36e4bbb7201b6b92e8a3/diff:/var/lib/docker/overlay2/c254df4f1ca37631da330237a0b14a97899443adc3d1ad0464fd53647495697d/diff:/var/lib/docker/overlay2/93d5a29e06840eb21aa8205170b35da7878e8ad5cc26f14284e3e9cbbc81e29b/diff:/var/lib/docker/overlay2/15edb436f7a6f17bfdbdd3b9c20158148f279bede62e0158bfbc0b3cd0fc67a8/diff:/var/lib/docker/overlay2/ea0e01e19f2669c3f2bd2e74af6cac307b886b6460dce465b0af91d2df65be2b/diff:/var/lib/docker/overlay2/9b02dd226da96954107ff3eecafc3da5c10ef298f98e56b6db7c96f356ef376d/diff:/var/lib/docker/overlay2/43115b34af7cd2f094683526712a0004c8a7cff9cd349cb02bb15f483baa9183/diff:/var/lib/docker/overlay2/35f542e833800a7b5faaf289f862fffbeaed8560bff7cfd325ff8dd8766020fd/diff:/var/lib/docker/overlay2/995aed6fb219dff8553edf4a1429d254dcd808bed821cbb23a6d9d5fab1a4be7/diff:/var/lib/docker/overlay2/ecef601f547d527773e5f16c9c909e0a665512ec2b7c55972feea8c77ceb23a9/diff:/var/lib/docker/overlay2/df205efa3222d274da18e41f9f8a6b757548e13cc64c553f396e0cdaf49
e8435/diff:/var/lib/docker/overlay2/754fd26fd5ed4dcbe16a301f62e223982efee49bd398f0d8c5cf551944e80848/diff:/var/lib/docker/overlay2/8acda0b26743e7e9aad8901f1ebf9614ffc34dfb2be4a2ec5f0f3833b02ac9fc/diff:/var/lib/docker/overlay2/0c7b2c530e7c4ade75d4c066fc3e35723f9325143f6e92f10231b70a070135ea/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c380036211bcbd12603d351cec038933dc46ca7ce4e6e8c7cca91c856e3e97bd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c380036211bcbd12603d351cec038933dc46ca7ce4e6e8c7cca91c856e3e97bd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c380036211bcbd12603d351cec038933dc46ca7ce4e6e8c7cca91c856e3e97bd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-759449",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-759449/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-759449",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-759449",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-759449",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a9dca3899429ca00874c92fc9fa9e09ec24e9916c201a431e180a99b85bbbbfa",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33578"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33577"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33574"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33576"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33575"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a9dca3899429",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-759449": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6bf934508cf1",
	                        "missing-upgrade-759449"
	                    ],
	                    "NetworkID": "65f89f9d285f5d70e83e70a40d49e0c760bc032d6c5a7dbc74cd9a1fda1129c8",
	                    "EndpointID": "c67ca0109742e0ede02294e482d46df9ac8d15c1935cc32b4c74252e3c9d2f54",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-759449 -n missing-upgrade-759449
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-759449 -n missing-upgrade-759449: exit status 6 (319.193139ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 20:49:25.549812  752774 status.go:415] kubeconfig endpoint: got: 192.168.59.154:8443, want: 192.168.76.2:8443

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-759449" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
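
Note: exit status 6 here is a stale-kubeconfig symptom rather than a dead host: the container reports Running, but kubeconfig still points at the profile's previous endpoint (got 192.168.59.154:8443, want 192.168.76.2:8443). The warning's own suggestion fixes it; the second command is an illustrative check, not part of the test:

	out/minikube-linux-arm64 -p missing-upgrade-759449 update-context
	# should now print https://192.168.76.2:8443
	kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
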
helpers_test.go:175: Cleaning up "missing-upgrade-759449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-759449
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-759449: (1.852600444s)
--- FAIL: TestMissingContainerUpgrade (188.28s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (105.01s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.1837240470.exe start -p stopped-upgrade-272282 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.17.0.1837240470.exe start -p stopped-upgrade-272282 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m26.772960359s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.17.0.1837240470.exe -p stopped-upgrade-272282 stop
E0108 20:51:06.120439  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.17.0.1837240470.exe -p stopped-upgrade-272282 stop: (11.899596079s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-272282 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-272282 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.338231575s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-272282] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-633350/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-633350/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-272282 in cluster stopped-upgrade-272282
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Restarting existing docker container for "stopped-upgrade-272282" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 20:51:07.978144  757531 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:51:07.980255  757531 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:51:07.980263  757531 out.go:309] Setting ErrFile to fd 2...
	I0108 20:51:07.980269  757531 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:51:07.980552  757531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-633350/.minikube/bin
	I0108 20:51:07.980964  757531 out.go:303] Setting JSON to false
	I0108 20:51:07.985404  757531 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12810,"bootTime":1704734258,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0108 20:51:07.985500  757531 start.go:138] virtualization:  
	I0108 20:51:07.988799  757531 out.go:177] * [stopped-upgrade-272282] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 20:51:07.991096  757531 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0108 20:51:07.995889  757531 notify.go:220] Checking for updates...
	I0108 20:51:07.999129  757531 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:51:08.001575  757531 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:51:08.003739  757531 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-633350/kubeconfig
	I0108 20:51:08.006788  757531 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-633350/.minikube
	I0108 20:51:08.008823  757531 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0108 20:51:08.011252  757531 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:51:08.013893  757531 config.go:182] Loaded profile config "stopped-upgrade-272282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0108 20:51:08.016656  757531 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0108 20:51:08.020565  757531 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:51:08.085339  757531 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:51:08.085465  757531 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:51:08.245117  757531 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-08 20:51:08.226956409 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:51:08.245233  757531 docker.go:295] overlay module found
	I0108 20:51:08.248975  757531 out.go:177] * Using the docker driver based on existing profile
	I0108 20:51:08.250890  757531 start.go:298] selected driver: docker
	I0108 20:51:08.250905  757531 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-272282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-272282 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.155 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 20:51:08.250997  757531 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:51:08.251590  757531 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:51:08.261332  757531 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I0108 20:51:08.368076  757531 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-08 20:51:08.358815476 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:51:08.368368  757531 cni.go:84] Creating CNI manager for ""
	I0108 20:51:08.368400  757531 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 20:51:08.368413  757531 start_flags.go:323] config:
	{Name:stopped-upgrade-272282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-272282 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.155 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 20:51:08.371439  757531 out.go:177] * Starting control plane node stopped-upgrade-272282 in cluster stopped-upgrade-272282
	I0108 20:51:08.373394  757531 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 20:51:08.375440  757531 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:51:08.377460  757531 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0108 20:51:08.377545  757531 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0108 20:51:08.400428  757531 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0108 20:51:08.400456  757531 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0108 20:51:08.454621  757531 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0108 20:51:08.454784  757531 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/stopped-upgrade-272282/config.json ...
	I0108 20:51:08.454909  757531 cache.go:107] acquiring lock: {Name:mk3c8286e2cc2bf23333f2fde93bbbffaca2d67d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:51:08.454998  757531 cache.go:115] /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0108 20:51:08.455011  757531 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 117.22µs
	I0108 20:51:08.455027  757531 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0108 20:51:08.455038  757531 cache.go:194] Successfully downloaded all kic artifacts
	I0108 20:51:08.455037  757531 cache.go:107] acquiring lock: {Name:mk44cb6b843ba721f847f64865744c5f7915221a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:51:08.455068  757531 cache.go:115] /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0108 20:51:08.455073  757531 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 37.67µs
	I0108 20:51:08.455071  757531 start.go:365] acquiring machines lock for stopped-upgrade-272282: {Name:mk5c1049f8b80677520c204a713596e0408c48f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:51:08.455080  757531 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0108 20:51:08.455090  757531 cache.go:107] acquiring lock: {Name:mkdf84d353c206e379592d67df524a8e57bb96f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:51:08.455112  757531 start.go:369] acquired machines lock for "stopped-upgrade-272282" in 28.374µs
	I0108 20:51:08.455117  757531 cache.go:115] /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0108 20:51:08.455122  757531 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 33.691µs
	I0108 20:51:08.455128  757531 start.go:96] Skipping create...Using existing machine configuration
	I0108 20:51:08.455129  757531 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0108 20:51:08.455134  757531 fix.go:54] fixHost starting: 
	I0108 20:51:08.455159  757531 cache.go:107] acquiring lock: {Name:mk94eab46ef127117a2ac55cb5fea6764e134f30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:51:08.455198  757531 cache.go:115] /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0108 20:51:08.455206  757531 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 48.173µs
	I0108 20:51:08.455212  757531 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0108 20:51:08.455222  757531 cache.go:107] acquiring lock: {Name:mk1f9dd73c9040b4843877ea6d579cd2a6afc14d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:51:08.455250  757531 cache.go:115] /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0108 20:51:08.455256  757531 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 35.151µs
	I0108 20:51:08.455263  757531 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0108 20:51:08.455271  757531 cache.go:107] acquiring lock: {Name:mkc0e5eb4ee5b95208370bf9ab86e472522e23cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:51:08.455296  757531 cache.go:115] /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0108 20:51:08.455301  757531 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 30.302µs
	I0108 20:51:08.455307  757531 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0108 20:51:08.455315  757531 cache.go:107] acquiring lock: {Name:mke00c1fa35ee123cfffa38e119041010daad15e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:51:08.455338  757531 cache.go:115] /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0108 20:51:08.455343  757531 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 29.128µs
	I0108 20:51:08.455349  757531 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0108 20:51:08.455357  757531 cache.go:107] acquiring lock: {Name:mke5483038fdde0966ca33aae1d2ab3eafd4be68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:51:08.455389  757531 cache.go:115] /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0108 20:51:08.455394  757531 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 37.916µs
	I0108 20:51:08.455402  757531 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0108 20:51:08.455408  757531 cache.go:87] Successfully saved all images to host disk.
	I0108 20:51:08.455428  757531 cli_runner.go:164] Run: docker container inspect stopped-upgrade-272282 --format={{.State.Status}}
	I0108 20:51:08.473820  757531 fix.go:102] recreateIfNeeded on stopped-upgrade-272282: state=Stopped err=<nil>
	W0108 20:51:08.473853  757531 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 20:51:08.477696  757531 out.go:177] * Restarting existing docker container for "stopped-upgrade-272282" ...
	I0108 20:51:08.479986  757531 cli_runner.go:164] Run: docker start stopped-upgrade-272282
	I0108 20:51:08.796553  757531 cli_runner.go:164] Run: docker container inspect stopped-upgrade-272282 --format={{.State.Status}}
	I0108 20:51:08.816888  757531 kic.go:430] container "stopped-upgrade-272282" state is running.
	I0108 20:51:08.818877  757531 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-272282
	I0108 20:51:08.846392  757531 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/stopped-upgrade-272282/config.json ...
	I0108 20:51:08.846687  757531 machine.go:88] provisioning docker machine ...
	I0108 20:51:08.846712  757531 ubuntu.go:169] provisioning hostname "stopped-upgrade-272282"
	I0108 20:51:08.846765  757531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-272282
	I0108 20:51:08.874786  757531 main.go:141] libmachine: Using SSH client type: native
	I0108 20:51:08.875237  757531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33586 <nil> <nil>}
	I0108 20:51:08.875256  757531 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-272282 && echo "stopped-upgrade-272282" | sudo tee /etc/hostname
	I0108 20:51:08.875840  757531 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0108 20:51:12.030991  757531 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-272282
	
	I0108 20:51:12.031073  757531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-272282
	I0108 20:51:12.053569  757531 main.go:141] libmachine: Using SSH client type: native
	I0108 20:51:12.053979  757531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33586 <nil> <nil>}
	I0108 20:51:12.054004  757531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-272282' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-272282/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-272282' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:51:12.195275  757531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:51:12.195305  757531 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17907-633350/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-633350/.minikube}
	I0108 20:51:12.195326  757531 ubuntu.go:177] setting up certificates
	I0108 20:51:12.195337  757531 provision.go:83] configureAuth start
	I0108 20:51:12.195403  757531 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-272282
	I0108 20:51:12.213163  757531 provision.go:138] copyHostCerts
	I0108 20:51:12.213253  757531 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-633350/.minikube/ca.pem, removing ...
	I0108 20:51:12.213273  757531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-633350/.minikube/ca.pem
	I0108 20:51:12.213354  757531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-633350/.minikube/ca.pem (1082 bytes)
	I0108 20:51:12.213453  757531 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-633350/.minikube/cert.pem, removing ...
	I0108 20:51:12.213462  757531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-633350/.minikube/cert.pem
	I0108 20:51:12.213490  757531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-633350/.minikube/cert.pem (1123 bytes)
	I0108 20:51:12.213546  757531 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-633350/.minikube/key.pem, removing ...
	I0108 20:51:12.213557  757531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-633350/.minikube/key.pem
	I0108 20:51:12.213583  757531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-633350/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-633350/.minikube/key.pem (1679 bytes)
	I0108 20:51:12.214705  757531 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-272282 san=[192.168.59.155 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-272282]
	I0108 20:51:12.430499  757531 provision.go:172] copyRemoteCerts
	I0108 20:51:12.430577  757531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:51:12.430619  757531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-272282
	I0108 20:51:12.448680  757531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33586 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/stopped-upgrade-272282/id_rsa Username:docker}
	I0108 20:51:12.547601  757531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 20:51:12.570676  757531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 20:51:12.593753  757531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 20:51:12.616202  757531 provision.go:86] duration metric: configureAuth took 420.848209ms
	I0108 20:51:12.616229  757531 ubuntu.go:193] setting minikube options for container-runtime
	I0108 20:51:12.616442  757531 config.go:182] Loaded profile config "stopped-upgrade-272282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0108 20:51:12.616549  757531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-272282
	I0108 20:51:12.634959  757531 main.go:141] libmachine: Using SSH client type: native
	I0108 20:51:12.635377  757531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 33586 <nil> <nil>}
	I0108 20:51:12.635397  757531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 20:51:13.057009  757531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 20:51:13.057034  757531 machine.go:91] provisioned docker machine in 4.210334705s
	I0108 20:51:13.057045  757531 start.go:300] post-start starting for "stopped-upgrade-272282" (driver="docker")
	I0108 20:51:13.057056  757531 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:51:13.057115  757531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:51:13.057178  757531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-272282
	I0108 20:51:13.077212  757531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33586 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/stopped-upgrade-272282/id_rsa Username:docker}
	I0108 20:51:13.175542  757531 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:51:13.179338  757531 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 20:51:13.179365  757531 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 20:51:13.179385  757531 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 20:51:13.179419  757531 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0108 20:51:13.179431  757531 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-633350/.minikube/addons for local assets ...
	I0108 20:51:13.179503  757531 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-633350/.minikube/files for local assets ...
	I0108 20:51:13.179592  757531 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/6387322.pem -> 6387322.pem in /etc/ssl/certs
	I0108 20:51:13.179701  757531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 20:51:13.188200  757531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/ssl/certs/6387322.pem --> /etc/ssl/certs/6387322.pem (1708 bytes)
	I0108 20:51:13.212081  757531 start.go:303] post-start completed in 155.019484ms
	I0108 20:51:13.212164  757531 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:51:13.212208  757531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-272282
	I0108 20:51:13.230136  757531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33586 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/stopped-upgrade-272282/id_rsa Username:docker}
	I0108 20:51:13.324237  757531 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 20:51:13.329519  757531 fix.go:56] fixHost completed within 4.874378184s
	I0108 20:51:13.329543  757531 start.go:83] releasing machines lock for "stopped-upgrade-272282", held for 4.874422s
	I0108 20:51:13.329648  757531 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-272282
	I0108 20:51:13.348143  757531 ssh_runner.go:195] Run: cat /version.json
	I0108 20:51:13.348199  757531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-272282
	I0108 20:51:13.348210  757531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 20:51:13.348258  757531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-272282
	I0108 20:51:13.373266  757531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33586 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/stopped-upgrade-272282/id_rsa Username:docker}
	I0108 20:51:13.378836  757531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33586 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/stopped-upgrade-272282/id_rsa Username:docker}
	W0108 20:51:13.470928  757531 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0108 20:51:13.471011  757531 ssh_runner.go:195] Run: systemctl --version
	I0108 20:51:13.544226  757531 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 20:51:13.653450  757531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 20:51:13.659067  757531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:51:13.679651  757531 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 20:51:13.679738  757531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:51:13.707586  757531 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 20:51:13.707607  757531 start.go:475] detecting cgroup driver to use...
	I0108 20:51:13.707638  757531 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 20:51:13.707690  757531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 20:51:13.736201  757531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 20:51:13.748360  757531 docker.go:217] disabling cri-docker service (if available) ...
	I0108 20:51:13.748480  757531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 20:51:13.760641  757531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 20:51:13.773045  757531 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0108 20:51:13.785937  757531 docker.go:227] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0108 20:51:13.786012  757531 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 20:51:13.896541  757531 docker.go:233] disabling docker service ...
	I0108 20:51:13.896662  757531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 20:51:13.909339  757531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 20:51:13.921865  757531 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 20:51:14.024624  757531 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 20:51:14.134252  757531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 20:51:14.146170  757531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:51:14.164112  757531 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0108 20:51:14.164189  757531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:51:14.177685  757531 out.go:177] 
	W0108 20:51:14.179643  757531 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0108 20:51:14.179669  757531 out.go:239] * 
	W0108 20:51:14.180515  757531 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 20:51:14.183154  757531 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-272282 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (105.01s)
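
Root cause: the v1.17.0 kicbase image provisioned by the old binary predates the /etc/crio/crio.conf.d/02-crio.conf drop-in, so the sed that rewrites pause_image has no file to edit, exits 2, and start surfaces that as RUNTIME_ENABLE. A defensive variant of the failing command is sketched below; the [crio.image] section header and the create-if-missing behavior are assumptions for illustration, not minikube's actual recovery path:

    # Rewrite pause_image when the drop-in exists; otherwise write a minimal
    # one rather than letting sed exit 2 on the missing file (assumed fix).
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    PAUSE=registry.k8s.io/pause:3.2
    sudo mkdir -p "$(dirname "$CONF")"
    if [ -f "$CONF" ]; then
      sudo sed -i "s|^.*pause_image = .*$|pause_image = \"$PAUSE\"|" "$CONF"
    else
      printf '[crio.image]\npause_image = "%s"\n' "$PAUSE" | sudo tee "$CONF" >/dev/null
    fi
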

                                                
                                    

Test pass (278/316)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 21.42
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.1
10 TestDownloadOnly/v1.28.4/json-events 14.07
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.09
17 TestDownloadOnly/v1.29.0-rc.2/json-events 18.72
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.09
23 TestDownloadOnly/DeleteAll 0.24
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.15
26 TestBinaryMirror 0.63
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.11
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.11
32 TestAddons/Setup 156.91
34 TestAddons/parallel/Registry 15.84
36 TestAddons/parallel/InspektorGadget 11.98
37 TestAddons/parallel/MetricsServer 6.02
40 TestAddons/parallel/CSI 58.97
41 TestAddons/parallel/Headlamp 14.53
42 TestAddons/parallel/CloudSpanner 5.73
43 TestAddons/parallel/LocalPath 52.08
44 TestAddons/parallel/NvidiaDevicePlugin 6.73
45 TestAddons/parallel/Yakd 6
48 TestAddons/serial/GCPAuth/Namespaces 0.17
49 TestAddons/StoppedEnableDisable 12.36
50 TestCertOptions 34.74
51 TestCertExpiration 244.06
53 TestForceSystemdFlag 41.21
54 TestForceSystemdEnv 41.47
60 TestErrorSpam/setup 32.27
61 TestErrorSpam/start 0.85
62 TestErrorSpam/status 1.13
63 TestErrorSpam/pause 1.88
64 TestErrorSpam/unpause 1.93
65 TestErrorSpam/stop 1.5
68 TestFunctional/serial/CopySyncFile 0.01
69 TestFunctional/serial/StartWithProxy 77.08
70 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/SoftStart 33.66
72 TestFunctional/serial/KubeContext 0.07
73 TestFunctional/serial/KubectlGetPods 0.1
76 TestFunctional/serial/CacheCmd/cache/add_remote 3.8
77 TestFunctional/serial/CacheCmd/cache/add_local 1.1
78 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
79 TestFunctional/serial/CacheCmd/cache/list 0.07
80 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.35
81 TestFunctional/serial/CacheCmd/cache/cache_reload 2.16
82 TestFunctional/serial/CacheCmd/cache/delete 0.15
83 TestFunctional/serial/MinikubeKubectlCmd 0.15
84 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
85 TestFunctional/serial/ExtraConfig 34.02
86 TestFunctional/serial/ComponentHealth 0.1
87 TestFunctional/serial/LogsCmd 1.81
88 TestFunctional/serial/LogsFileCmd 1.83
89 TestFunctional/serial/InvalidService 4.27
91 TestFunctional/parallel/ConfigCmd 0.59
92 TestFunctional/parallel/DashboardCmd 12.26
93 TestFunctional/parallel/DryRun 0.51
94 TestFunctional/parallel/InternationalLanguage 0.26
95 TestFunctional/parallel/StatusCmd 1.4
99 TestFunctional/parallel/ServiceCmdConnect 10.68
100 TestFunctional/parallel/AddonsCmd 0.18
101 TestFunctional/parallel/PersistentVolumeClaim 25.73
103 TestFunctional/parallel/SSHCmd 0.86
104 TestFunctional/parallel/CpCmd 2.26
106 TestFunctional/parallel/FileSync 0.37
107 TestFunctional/parallel/CertSync 2.4
111 TestFunctional/parallel/NodeLabels 0.1
113 TestFunctional/parallel/NonActiveRuntimeDisabled 0.83
115 TestFunctional/parallel/License 0.36
116 TestFunctional/parallel/Version/short 0.09
117 TestFunctional/parallel/Version/components 1.44
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.35
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
122 TestFunctional/parallel/ImageCommands/ImageBuild 2.85
123 TestFunctional/parallel/ImageCommands/Setup 1.71
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.26
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.93
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.52
129 TestFunctional/parallel/ProfileCmd/profile_list 0.48
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.59
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.73
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.46
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.88
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.45
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.98
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
145 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
146 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.29
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.95
148 TestFunctional/parallel/ServiceCmd/DeployApp 7.23
149 TestFunctional/parallel/ServiceCmd/List 0.54
150 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
151 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
152 TestFunctional/parallel/ServiceCmd/Format 0.44
153 TestFunctional/parallel/ServiceCmd/URL 0.43
154 TestFunctional/parallel/MountCmd/any-port 7.66
155 TestFunctional/parallel/MountCmd/specific-port 1.58
156 TestFunctional/parallel/MountCmd/VerifyCleanup 3.26
157 TestFunctional/delete_addon-resizer_images 0.09
158 TestFunctional/delete_my-image_image 0.02
159 TestFunctional/delete_minikube_cached_images 0.02
163 TestIngressAddonLegacy/StartLegacyK8sCluster 96.27
165 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.49
166 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.64
170 TestJSONOutput/start/Command 53.68
171 TestJSONOutput/start/Audit 0
173 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/pause/Command 0.82
177 TestJSONOutput/pause/Audit 0
179 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/unpause/Command 0.74
183 TestJSONOutput/unpause/Audit 0
185 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/stop/Command 5.99
189 TestJSONOutput/stop/Audit 0
191 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
193 TestErrorJSONOutput 0.27
195 TestKicCustomNetwork/create_custom_network 51.26
196 TestKicCustomNetwork/use_default_bridge_network 33.57
197 TestKicExistingNetwork 31.69
198 TestKicCustomSubnet 36.19
199 TestKicStaticIP 38.59
200 TestMainNoArgs 0.07
201 TestMinikubeProfile 69.82
204 TestMountStart/serial/StartWithMountFirst 7.13
205 TestMountStart/serial/VerifyMountFirst 0.3
206 TestMountStart/serial/StartWithMountSecond 10.4
207 TestMountStart/serial/VerifyMountSecond 0.29
208 TestMountStart/serial/DeleteFirst 1.68
209 TestMountStart/serial/VerifyMountPostDelete 0.3
210 TestMountStart/serial/Stop 1.22
211 TestMountStart/serial/RestartStopped 7.8
212 TestMountStart/serial/VerifyMountPostStop 0.31
215 TestMultiNode/serial/FreshStart2Nodes 93.55
216 TestMultiNode/serial/DeployApp2Nodes 5.92
218 TestMultiNode/serial/AddNode 48.97
219 TestMultiNode/serial/MultiNodeLabels 0.09
220 TestMultiNode/serial/ProfileList 0.35
221 TestMultiNode/serial/CopyFile 11.16
222 TestMultiNode/serial/StopNode 2.38
223 TestMultiNode/serial/StartAfterStop 12.67
224 TestMultiNode/serial/RestartKeepsNodes 123.5
225 TestMultiNode/serial/DeleteNode 5.15
226 TestMultiNode/serial/StopMultiNode 23.94
227 TestMultiNode/serial/RestartMultiNode 85.79
228 TestMultiNode/serial/ValidateNameConflict 34.61
233 TestPreload 171.25
235 TestScheduledStopUnix 113.86
238 TestInsufficientStorage 11.08
241 TestKubernetesUpgrade 396.95
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
245 TestNoKubernetes/serial/StartWithK8s 44.52
246 TestNoKubernetes/serial/StartWithStopK8s 12.12
247 TestNoKubernetes/serial/Start 8.78
248 TestNoKubernetes/serial/VerifyK8sNotRunning 0.43
249 TestNoKubernetes/serial/ProfileList 1.08
250 TestNoKubernetes/serial/Stop 1.25
251 TestNoKubernetes/serial/StartNoArgs 8.12
252 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.4
253 TestStoppedBinaryUpgrade/Setup 1.76
255 TestStoppedBinaryUpgrade/MinikubeLogs 0.86
264 TestPause/serial/Start 81.39
265 TestPause/serial/SecondStartNoReconfiguration 43.88
266 TestPause/serial/Pause 1.03
267 TestPause/serial/VerifyStatus 0.55
268 TestPause/serial/Unpause 1.24
269 TestPause/serial/PauseAgain 1.72
270 TestPause/serial/DeletePaused 3.56
271 TestPause/serial/VerifyDeletedResources 0.83
279 TestNetworkPlugins/group/false 5.26
284 TestStartStop/group/old-k8s-version/serial/FirstStart 134.79
285 TestStartStop/group/old-k8s-version/serial/DeployApp 9.5
286 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.02
287 TestStartStop/group/old-k8s-version/serial/Stop 11.99
288 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
289 TestStartStop/group/old-k8s-version/serial/SecondStart 449.39
291 TestStartStop/group/no-preload/serial/FirstStart 65.54
292 TestStartStop/group/no-preload/serial/DeployApp 9.33
293 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
294 TestStartStop/group/no-preload/serial/Stop 11.99
295 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
296 TestStartStop/group/no-preload/serial/SecondStart 620.22
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.14
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
300 TestStartStop/group/old-k8s-version/serial/Pause 3.49
302 TestStartStop/group/embed-certs/serial/FirstStart 78.63
303 TestStartStop/group/embed-certs/serial/DeployApp 10.37
304 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.24
305 TestStartStop/group/embed-certs/serial/Stop 12.05
306 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
307 TestStartStop/group/embed-certs/serial/SecondStart 598.59
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
309 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
310 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
311 TestStartStop/group/no-preload/serial/Pause 3.43
313 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 79.19
314 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.37
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.17
316 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.1
317 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
318 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 614.62
319 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
320 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.14
321 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.35
322 TestStartStop/group/embed-certs/serial/Pause 4.11
324 TestStartStop/group/newest-cni/serial/FirstStart 49.4
325 TestStartStop/group/newest-cni/serial/DeployApp 0
326 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.1
327 TestStartStop/group/newest-cni/serial/Stop 1.27
328 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
329 TestStartStop/group/newest-cni/serial/SecondStart 30.82
330 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
331 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
333 TestStartStop/group/newest-cni/serial/Pause 3.21
334 TestNetworkPlugins/group/auto/Start 77.01
335 TestNetworkPlugins/group/auto/KubeletFlags 0.34
336 TestNetworkPlugins/group/auto/NetCatPod 11.28
337 TestNetworkPlugins/group/auto/DNS 0.21
338 TestNetworkPlugins/group/auto/Localhost 0.18
339 TestNetworkPlugins/group/auto/HairPin 0.19
340 TestNetworkPlugins/group/kindnet/Start 78.23
341 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
342 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
343 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
344 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
345 TestNetworkPlugins/group/kindnet/NetCatPod 11.27
346 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
347 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.49
348 TestNetworkPlugins/group/calico/Start 90.09
349 TestNetworkPlugins/group/kindnet/DNS 0.26
350 TestNetworkPlugins/group/kindnet/Localhost 0.2
351 TestNetworkPlugins/group/kindnet/HairPin 0.22
352 TestNetworkPlugins/group/custom-flannel/Start 78.48
353 TestNetworkPlugins/group/calico/ControllerPod 6.01
354 TestNetworkPlugins/group/calico/KubeletFlags 0.33
355 TestNetworkPlugins/group/calico/NetCatPod 10.29
356 TestNetworkPlugins/group/calico/DNS 0.23
357 TestNetworkPlugins/group/calico/Localhost 0.19
358 TestNetworkPlugins/group/calico/HairPin 0.18
359 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.36
360 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.27
361 TestNetworkPlugins/group/custom-flannel/DNS 0.26
362 TestNetworkPlugins/group/custom-flannel/Localhost 0.25
363 TestNetworkPlugins/group/custom-flannel/HairPin 0.27
364 TestNetworkPlugins/group/enable-default-cni/Start 65.26
365 TestNetworkPlugins/group/flannel/Start 67.42
366 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.44
367 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
368 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
369 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
370 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
371 TestNetworkPlugins/group/flannel/ControllerPod 6.01
372 TestNetworkPlugins/group/flannel/KubeletFlags 0.46
373 TestNetworkPlugins/group/flannel/NetCatPod 12.38
374 TestNetworkPlugins/group/bridge/Start 88.74
375 TestNetworkPlugins/group/flannel/DNS 0.24
376 TestNetworkPlugins/group/flannel/Localhost 0.21
377 TestNetworkPlugins/group/flannel/HairPin 0.25
378 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
379 TestNetworkPlugins/group/bridge/NetCatPod 12.28
380 TestNetworkPlugins/group/bridge/DNS 0.2
381 TestNetworkPlugins/group/bridge/Localhost 0.16
382 TestNetworkPlugins/group/bridge/HairPin 0.16
x
+
TestDownloadOnly/v1.16.0/json-events (21.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-031263 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-031263 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (21.416269478s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (21.42s)
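
The json-events check consumes the line-delimited JSON that start -o=json prints: one CloudEvents-style object per line. A quick way to eyeball that stream outside the harness is sketched below; the profile name download-only-demo is hypothetical, and the event type string and .data.name field are recalled from minikube's JSON output rather than shown in this report:

    # Print the step names from minikube's JSON event stream (assumed schema).
    out/minikube-linux-arm64 start -o=json --download-only -p download-only-demo \
      --force --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.name'
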

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-031263
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-031263: exit status 85 (98.022842ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-031263 | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC |          |
	|         | -p download-only-031263        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:09:29
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:09:29.746087  638737 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:09:29.746301  638737 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:09:29.746326  638737 out.go:309] Setting ErrFile to fd 2...
	I0108 20:09:29.746347  638737 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:09:29.746669  638737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-633350/.minikube/bin
	W0108 20:09:29.746862  638737 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17907-633350/.minikube/config/config.json: open /home/jenkins/minikube-integration/17907-633350/.minikube/config/config.json: no such file or directory
	I0108 20:09:29.747332  638737 out.go:303] Setting JSON to true
	I0108 20:09:29.748157  638737 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10312,"bootTime":1704734258,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0108 20:09:29.748251  638737 start.go:138] virtualization:  
	I0108 20:09:29.751748  638737 out.go:97] [download-only-031263] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	W0108 20:09:29.751970  638737 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball: no such file or directory
	I0108 20:09:29.752031  638737 notify.go:220] Checking for updates...
	I0108 20:09:29.754151  638737 out.go:169] MINIKUBE_LOCATION=17907
	I0108 20:09:29.756726  638737 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:09:29.759339  638737 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17907-633350/kubeconfig
	I0108 20:09:29.761581  638737 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-633350/.minikube
	I0108 20:09:29.764022  638737 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0108 20:09:29.768602  638737 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 20:09:29.768847  638737 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:09:29.792286  638737 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:09:29.792391  638737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:09:29.857369  638737 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-01-08 20:09:29.847922175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:09:29.857488  638737 docker.go:295] overlay module found
	I0108 20:09:29.859970  638737 out.go:97] Using the docker driver based on user configuration
	I0108 20:09:29.860014  638737 start.go:298] selected driver: docker
	I0108 20:09:29.860022  638737 start.go:902] validating driver "docker" against <nil>
	I0108 20:09:29.860116  638737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:09:29.933295  638737 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-01-08 20:09:29.924534335 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:09:29.933452  638737 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 20:09:29.933727  638737 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0108 20:09:29.933896  638737 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0108 20:09:29.936233  638737 out.go:169] Using Docker driver with root privileges
	I0108 20:09:29.938678  638737 cni.go:84] Creating CNI manager for ""
	I0108 20:09:29.938698  638737 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 20:09:29.938708  638737 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 20:09:29.938720  638737 start_flags.go:323] config:
	{Name:download-only-031263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-031263 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:09:29.941380  638737 out.go:97] Starting control plane node download-only-031263 in cluster download-only-031263
	I0108 20:09:29.941400  638737 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 20:09:29.943901  638737 out.go:97] Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:09:29.943939  638737 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0108 20:09:29.944081  638737 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0108 20:09:29.962209  638737 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0108 20:09:29.962351  638737 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I0108 20:09:29.962476  638737 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0108 20:09:30.029596  638737 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0108 20:09:30.029628  638737 cache.go:56] Caching tarball of preloaded images
	I0108 20:09:30.030280  638737 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0108 20:09:30.033373  638737 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0108 20:09:30.033414  638737 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0108 20:09:30.170552  638737 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0108 20:09:36.786274  638737 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I0108 20:09:49.358432  638737 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0108 20:09:49.358542  638737 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-031263"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.10s)
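
Two details worth noting in the PASS above. First, the exit status 85 from minikube logs is expected: --download-only never creates a node, so there is no cluster to read logs from and the test only inspects the audit and last-start output. Second, the preload URL carries its md5 in the query string (?checksum=md5:...), which, as the "getting/saving/verifying checksum" lines show, is verified client-side after the fetch. The cached tarball can be re-checked by hand, assuming the default .minikube cache layout rather than the Jenkins-specific path in this run:

    # Re-verify the cached v1.16.0 preload against the md5 from the URL above.
    SUM=743cd3b7071469270e4dbdc0d89badaa
    TARBALL="$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4"
    echo "$SUM  $TARBALL" | md5sum -c -
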

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (14.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-031263 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-031263 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (14.071489955s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (14.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-031263
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-031263: exit status 85 (87.858181ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-031263 | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC |          |
	|         | -p download-only-031263        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-031263 | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC |          |
	|         | -p download-only-031263        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:09:51
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:09:51.256773  638812 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:09:51.257016  638812 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:09:51.257042  638812 out.go:309] Setting ErrFile to fd 2...
	I0108 20:09:51.257060  638812 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:09:51.257358  638812 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-633350/.minikube/bin
	W0108 20:09:51.257554  638812 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17907-633350/.minikube/config/config.json: open /home/jenkins/minikube-integration/17907-633350/.minikube/config/config.json: no such file or directory
	I0108 20:09:51.257832  638812 out.go:303] Setting JSON to true
	I0108 20:09:51.258689  638812 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10334,"bootTime":1704734258,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0108 20:09:51.258841  638812 start.go:138] virtualization:  
	I0108 20:09:51.261814  638812 out.go:97] [download-only-031263] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 20:09:51.264689  638812 out.go:169] MINIKUBE_LOCATION=17907
	I0108 20:09:51.262135  638812 notify.go:220] Checking for updates...
	I0108 20:09:51.269373  638812 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:09:51.271734  638812 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17907-633350/kubeconfig
	I0108 20:09:51.274526  638812 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-633350/.minikube
	I0108 20:09:51.276642  638812 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0108 20:09:51.280717  638812 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 20:09:51.281299  638812 config.go:182] Loaded profile config "download-only-031263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0108 20:09:51.281365  638812 start.go:810] api.Load failed for download-only-031263: filestore "download-only-031263": Docker machine "download-only-031263" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 20:09:51.281469  638812 driver.go:392] Setting default libvirt URI to qemu:///system
	W0108 20:09:51.281500  638812 start.go:810] api.Load failed for download-only-031263: filestore "download-only-031263": Docker machine "download-only-031263" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 20:09:51.304775  638812 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:09:51.304889  638812 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:09:51.383721  638812 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-08 20:09:51.373822419 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:09:51.383823  638812 docker.go:295] overlay module found
	I0108 20:09:51.385970  638812 out.go:97] Using the docker driver based on existing profile
	I0108 20:09:51.386005  638812 start.go:298] selected driver: docker
	I0108 20:09:51.386015  638812 start.go:902] validating driver "docker" against &{Name:download-only-031263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-031263 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:09:51.386180  638812 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:09:51.450728  638812 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-08 20:09:51.44109945 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:09:51.451222  638812 cni.go:84] Creating CNI manager for ""
	I0108 20:09:51.451241  638812 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 20:09:51.451253  638812 start_flags.go:323] config:
	{Name:download-only-031263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-031263 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:09:51.453840  638812 out.go:97] Starting control plane node download-only-031263 in cluster download-only-031263
	I0108 20:09:51.453865  638812 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 20:09:51.456191  638812 out.go:97] Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:09:51.456215  638812 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:09:51.456368  638812 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0108 20:09:51.472901  638812 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0108 20:09:51.473035  638812 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I0108 20:09:51.473053  638812 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory, skipping pull
	I0108 20:09:51.473058  638812 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in cache, skipping pull
	I0108 20:09:51.473073  638812 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I0108 20:09:51.517427  638812 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0108 20:09:51.517449  638812 cache.go:56] Caching tarball of preloaded images
	I0108 20:09:51.517607  638812 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:09:51.519902  638812 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0108 20:09:51.519926  638812 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 ...
	I0108 20:09:51.637084  638812 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4?checksum=md5:23e2271fd1a7b32f52ce36ae8363c081 -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-031263"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)
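
This second run reuses the kicbase tarball cached by the first ("exists in cache, skipping pull") and only downloads the new per-version preload. Both caches sit under the run's .minikube directory, so the reuse is easy to confirm; the preloaded-tarball path matches the logs above, while the kic subdirectory name is an assumption about this minikube version:

    MINIKUBE_HOME="${MINIKUBE_HOME:-$HOME/.minikube}"
    ls -lh "$MINIKUBE_HOME/cache/kic" 2>/dev/null     # kicbase image tarball(s)
    ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball"   # one preload per K8s version
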

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (18.72s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-031263 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-031263 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (18.720404421s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (18.72s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-031263
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-031263: exit status 85 (90.70598ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-031263 | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC |          |
	|         | -p download-only-031263           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-031263 | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC |          |
	|         | -p download-only-031263           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-031263 | jenkins | v1.32.0 | 08 Jan 24 20:10 UTC |          |
	|         | -p download-only-031263           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:10:05
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:10:05.426621  638885 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:10:05.426745  638885 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:10:05.426755  638885 out.go:309] Setting ErrFile to fd 2...
	I0108 20:10:05.426761  638885 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:10:05.427185  638885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-633350/.minikube/bin
	W0108 20:10:05.427332  638885 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17907-633350/.minikube/config/config.json: open /home/jenkins/minikube-integration/17907-633350/.minikube/config/config.json: no such file or directory
	I0108 20:10:05.427595  638885 out.go:303] Setting JSON to true
	I0108 20:10:05.428517  638885 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10348,"bootTime":1704734258,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0108 20:10:05.428596  638885 start.go:138] virtualization:  
	I0108 20:10:05.431612  638885 out.go:97] [download-only-031263] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 20:10:05.434340  638885 out.go:169] MINIKUBE_LOCATION=17907
	I0108 20:10:05.431898  638885 notify.go:220] Checking for updates...
	I0108 20:10:05.436432  638885 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:10:05.438725  638885 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17907-633350/kubeconfig
	I0108 20:10:05.440799  638885 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-633350/.minikube
	I0108 20:10:05.442702  638885 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0108 20:10:05.446694  638885 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 20:10:05.447239  638885 config.go:182] Loaded profile config "download-only-031263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W0108 20:10:05.447286  638885 start.go:810] api.Load failed for download-only-031263: filestore "download-only-031263": Docker machine "download-only-031263" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 20:10:05.447387  638885 driver.go:392] Setting default libvirt URI to qemu:///system
	W0108 20:10:05.447415  638885 start.go:810] api.Load failed for download-only-031263: filestore "download-only-031263": Docker machine "download-only-031263" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 20:10:05.472708  638885 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:10:05.472824  638885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:10:05.556139  638885 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-08 20:10:05.545695054 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:10:05.556241  638885 docker.go:295] overlay module found
	I0108 20:10:05.558539  638885 out.go:97] Using the docker driver based on existing profile
	I0108 20:10:05.558565  638885 start.go:298] selected driver: docker
	I0108 20:10:05.558573  638885 start.go:902] validating driver "docker" against &{Name:download-only-031263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-031263 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:10:05.558759  638885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:10:05.624794  638885 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-08 20:10:05.615056525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:10:05.625251  638885 cni.go:84] Creating CNI manager for ""
	I0108 20:10:05.625272  638885 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 20:10:05.625285  638885 start_flags.go:323] config:
	{Name:download-only-031263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-031263 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:10:05.627637  638885 out.go:97] Starting control plane node download-only-031263 in cluster download-only-031263
	I0108 20:10:05.627661  638885 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 20:10:05.629864  638885 out.go:97] Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:10:05.629888  638885 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 20:10:05.629914  638885 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0108 20:10:05.646721  638885 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0108 20:10:05.646869  638885 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I0108 20:10:05.646896  638885 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory, skipping pull
	I0108 20:10:05.646902  638885 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in cache, skipping pull
	I0108 20:10:05.646914  638885 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I0108 20:10:05.694241  638885 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I0108 20:10:05.694268  638885 cache.go:56] Caching tarball of preloaded images
	I0108 20:10:05.695022  638885 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 20:10:05.697690  638885 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0108 20:10:05.697709  638885 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I0108 20:10:05.821188  638885 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:307124b87428587d9288b24ec2db2592 -> /home/jenkins/minikube-integration/17907-633350/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-031263"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)

TestDownloadOnly/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.24s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-031263
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.63s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-174483 --alsologtostderr --binary-mirror http://127.0.0.1:43009 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-174483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-174483
--- PASS: TestBinaryMirror (0.63s)
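TestBinaryMirror checks that the kubectl, kubelet, and kubeadm binaries can be fetched from a user-supplied mirror instead of the default release location. A rough manual equivalent, assuming a mirror is already serving the binaries on the port used in this run (the profile name is illustrative):

	out/minikube-linux-arm64 start --download-only -p binary-mirror-demo --binary-mirror http://127.0.0.1:43009 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 delete -p binary-mirror-demo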

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.11s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-888287
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-888287: exit status 85 (105.763235ms)

-- stdout --
	* Profile "addons-888287" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-888287"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.11s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.11s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-888287
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-888287: exit status 85 (108.289049ms)

-- stdout --
	* Profile "addons-888287" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-888287"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.11s)
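Both PreSetup subtests pin down the same contract: addon commands aimed at a profile that has not been created yet must refuse to act and exit with status 85, as the captured stdout shows. Any unused profile name reproduces it, for example:

	out/minikube-linux-arm64 addons enable dashboard -p no-such-profile    # expected: exit status 85
	out/minikube-linux-arm64 addons disable dashboard -p no-such-profile   # expected: exit status 85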

TestAddons/Setup (156.91s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-888287 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-888287 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m36.914573169s)
--- PASS: TestAddons/Setup (156.91s)

TestAddons/parallel/Registry (15.84s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 40.765288ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-nxnl7" [9bd9f646-a0f5-4c14-83ae-cab1b85ed7d3] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006178459s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-njg8r" [ba0eee07-5e03-407b-a6dd-59f46344bb4c] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.006607333s
addons_test.go:340: (dbg) Run:  kubectl --context addons-888287 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-888287 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-888287 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.676848746s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-888287 ip
2024/01/08 20:13:18 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-888287 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.84s)
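The decisive step in the registry test is the wget probe: a throwaway busybox pod has to resolve the registry's in-cluster service DNS name and get an HTTP response back through the registry proxy. The same probe, lifted from the run above:

	kubectl --context addons-888287 run registry-test --rm -it --restart=Never --image=gcr.io/k8s-minikube/busybox \
	  -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"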

TestAddons/parallel/InspektorGadget (11.98s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-v6jfp" [d293974f-635c-42ed-bf13-9b35dbb81dfc] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.006802805s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-888287
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-888287: (5.967808427s)
--- PASS: TestAddons/parallel/InspektorGadget (11.98s)

TestAddons/parallel/MetricsServer (6.02s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 6.532335ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-q2j5d" [341d573d-1afd-4a0f-a2e3-1e9e775a827a] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00472344s
addons_test.go:415: (dbg) Run:  kubectl --context addons-888287 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-888287 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.02s)
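The metrics-server check reduces to waiting for the deployment to settle and then issuing a single query against the metrics API; if the addon is serving data, the top command returns per-pod usage:

	kubectl --context addons-888287 top pods -n kube-system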

TestAddons/parallel/CSI (58.97s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 9.996816ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-888287 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-888287 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d412b3c8-2db9-4448-baf4-dd08fa0e7125] Pending
helpers_test.go:344: "task-pv-pod" [d412b3c8-2db9-4448-baf4-dd08fa0e7125] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d412b3c8-2db9-4448-baf4-dd08fa0e7125] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003880708s
addons_test.go:584: (dbg) Run:  kubectl --context addons-888287 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-888287 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-888287 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-888287 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-888287 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-888287 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-888287 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [15f5ed86-594e-4d51-bb16-fc325aba8795] Pending
helpers_test.go:344: "task-pv-pod-restore" [15f5ed86-594e-4d51-bb16-fc325aba8795] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [15f5ed86-594e-4d51-bb16-fc325aba8795] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004805941s
addons_test.go:626: (dbg) Run:  kubectl --context addons-888287 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-888287 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-888287 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-888287 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-888287 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.845552947s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-888287 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (58.97s)
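Condensed, the CSI scenario is a snapshot round trip: provision a PVC, mount it in a pod, snapshot it, delete the originals, then restore a fresh PVC from the snapshot and mount that in turn. The manifests referenced live under the test suite's testdata/csi-hostpath-driver directory:

	kubectl create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl delete pod task-pv-pod && kubectl delete pvc hpvc
	kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml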

TestAddons/parallel/Headlamp (14.53s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-888287 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-888287 --alsologtostderr -v=1: (1.529974738s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-mdbs6" [de27d100-7f68-48a1-8f2c-9b2f691b7df1] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-mdbs6" [de27d100-7f68-48a1-8f2c-9b2f691b7df1] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-mdbs6" [de27d100-7f68-48a1-8f2c-9b2f691b7df1] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.003490228s
--- PASS: TestAddons/parallel/Headlamp (14.53s)

TestAddons/parallel/CloudSpanner (5.73s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-hbh8l" [4b4bedd2-181f-43a0-ab43-a51a271f255e] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004289149s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-888287
--- PASS: TestAddons/parallel/CloudSpanner (5.73s)

TestAddons/parallel/LocalPath (52.08s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-888287 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-888287 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-888287 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ba71c43e-92a4-4bc5-8fba-c1073ff6f334] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ba71c43e-92a4-4bc5-8fba-c1073ff6f334] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ba71c43e-92a4-4bc5-8fba-c1073ff6f334] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004702038s
addons_test.go:891: (dbg) Run:  kubectl --context addons-888287 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-888287 ssh "cat /opt/local-path-provisioner/pvc-eac25f32-f886-438b-b976-f4205af199ef_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-888287 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-888287 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-888287 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-888287 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.55122058s)
--- PASS: TestAddons/parallel/LocalPath (52.08s)
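The read-back over SSH works because the local-path provisioner backs each PVC with a plain directory on the node, named after the PV, namespace, and claim; the file a pod writes through the volume is therefore directly visible on the host. The PV name below is a placeholder, the concrete value for this run appears in the command above:

	out/minikube-linux-arm64 -p addons-888287 ssh "cat /opt/local-path-provisioner/<pv-name>_default_test-pvc/file1"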

TestAddons/parallel/NvidiaDevicePlugin (6.73s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-59965" [7ab86f51-f03a-4328-b2ac-dedbb03d23dd] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00437542s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-888287
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.73s)

TestAddons/parallel/Yakd (6.00s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-nr6ck" [33f6e3f2-22f4-4f68-96a0-d1a29e69af98] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003686322s
--- PASS: TestAddons/parallel/Yakd (6.00s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-888287 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-888287 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/StoppedEnableDisable (12.36s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-888287
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-888287: (12.031270495s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-888287
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-888287
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-888287
--- PASS: TestAddons/StoppedEnableDisable (12.36s)

TestCertOptions (34.74s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-224313 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-224313 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (31.956373443s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-224313 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-224313 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-224313 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-224313" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-224313
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-224313: (2.05878925s)
--- PASS: TestCertOptions (34.74s)
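The assertions boil down to inspecting the generated apiserver certificate inside the node with openssl; piping through grep to isolate the SAN block is an illustrative addition, not part of the test:

	out/minikube-linux-arm64 -p cert-options-224313 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"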

TestCertExpiration (244.06s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-953889 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-953889 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (39.055639776s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-953889 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-953889 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (22.355439811s)
helpers_test.go:175: Cleaning up "cert-expiration-953889" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-953889
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-953889: (2.64885896s)
--- PASS: TestCertExpiration (244.06s)
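The timing tells the story: the two starts account for roughly a minute of the 244s total, and the remaining ~3 minutes is the test waiting for the 3m certificates from the first start to lapse, so the second start (8760h, i.e. one year) has to regenerate them rather than fail. A sketch by hand, with an illustrative profile name:

	out/minikube-linux-arm64 start -p cert-exp-demo --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=crio
	sleep 180   # let the short-lived certificates expire
	out/minikube-linux-arm64 start -p cert-exp-demo --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=crio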

TestForceSystemdFlag (41.21s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-980184 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-980184 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.180439424s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-980184 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-980184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-980184
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-980184: (2.609584898s)
--- PASS: TestForceSystemdFlag (41.21s)
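The verification step is the cat of CRI-O's drop-in config: with --force-systemd, the expectation (an assumption here, stated in CRI-O's TOML vocabulary) is that the runtime section names systemd rather than cgroupfs as the cgroup manager:

	out/minikube-linux-arm64 -p force-systemd-flag-980184 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
	# expected to contain: cgroup_manager = "systemd"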

TestForceSystemdEnv (41.47s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-711530 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-711530 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.630131557s)
helpers_test.go:175: Cleaning up "force-systemd-env-711530" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-711530
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-711530: (2.840158004s)
--- PASS: TestForceSystemdEnv (41.47s)

TestErrorSpam/setup (32.27s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-220241 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-220241 --driver=docker  --container-runtime=crio
E0108 20:18:03.075606  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
E0108 20:18:03.082299  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
E0108 20:18:03.092523  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
E0108 20:18:03.112764  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
E0108 20:18:03.153022  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
E0108 20:18:03.233373  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
E0108 20:18:03.393744  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
E0108 20:18:03.714340  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
E0108 20:18:04.355175  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
E0108 20:18:05.635367  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
E0108 20:18:08.195594  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
E0108 20:18:13.316466  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
E0108 20:18:23.557000  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-220241 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-220241 --driver=docker  --container-runtime=crio: (32.26171661s)
--- PASS: TestErrorSpam/setup (32.27s)
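The interleaved cert_rotation errors are almost certainly stale state from the earlier addons run rather than a problem with this test: client-go is still watching the client certificate of the already-deleted addons-888287 profile. A quick diagnostic is to look for leftover references in the shared kubeconfig:

	grep -n "addons-888287" /home/jenkins/minikube-integration/17907-633350/kubeconfig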

TestErrorSpam/start (0.85s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220241 --log_dir /tmp/nospam-220241 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220241 --log_dir /tmp/nospam-220241 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220241 --log_dir /tmp/nospam-220241 start --dry-run
--- PASS: TestErrorSpam/start (0.85s)

TestErrorSpam/status (1.13s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220241 --log_dir /tmp/nospam-220241 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220241 --log_dir /tmp/nospam-220241 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220241 --log_dir /tmp/nospam-220241 status
--- PASS: TestErrorSpam/status (1.13s)

TestErrorSpam/pause (1.88s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220241 --log_dir /tmp/nospam-220241 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220241 --log_dir /tmp/nospam-220241 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220241 --log_dir /tmp/nospam-220241 pause
--- PASS: TestErrorSpam/pause (1.88s)

TestErrorSpam/unpause (1.93s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220241 --log_dir /tmp/nospam-220241 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220241 --log_dir /tmp/nospam-220241 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220241 --log_dir /tmp/nospam-220241 unpause
--- PASS: TestErrorSpam/unpause (1.93s)

TestErrorSpam/stop (1.5s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220241 --log_dir /tmp/nospam-220241 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-220241 --log_dir /tmp/nospam-220241 stop: (1.256866796s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220241 --log_dir /tmp/nospam-220241 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220241 --log_dir /tmp/nospam-220241 stop
--- PASS: TestErrorSpam/stop (1.50s)

TestFunctional/serial/CopySyncFile (0.01s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17907-633350/.minikube/files/etc/test/nested/copy/638732/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

TestFunctional/serial/StartWithProxy (77.08s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-arm64 start -p functional-735851 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0108 20:18:44.037207  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
E0108 20:19:24.997416  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
functional_test.go:2233: (dbg) Done: out/minikube-linux-arm64 start -p functional-735851 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m17.077826853s)
--- PASS: TestFunctional/serial/StartWithProxy (77.08s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (33.66s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-735851 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-735851 --alsologtostderr -v=8: (33.653532058s)
functional_test.go:659: soft start took 33.657377447s for "functional-735851" cluster.
--- PASS: TestFunctional/serial/SoftStart (33.66s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-735851 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.8s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-735851 cache add registry.k8s.io/pause:3.1: (1.262414408s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-735851 cache add registry.k8s.io/pause:3.3: (1.322319057s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-735851 cache add registry.k8s.io/pause:latest: (1.210723711s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.80s)
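
For reference, the cache flow exercised above can be replayed by hand against any profile (the profile name below simply mirrors this run): `cache add` pulls the image into the host-side cache and loads it into the node, which `crictl` can then confirm.

	out/minikube-linux-arm64 -p functional-735851 cache add registry.k8s.io/pause:3.1
	out/minikube-linux-arm64 cache list
	out/minikube-linux-arm64 -p functional-735851 ssh "sudo crictl images"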

TestFunctional/serial/CacheCmd/cache/add_local (1.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-735851 /tmp/TestFunctionalserialCacheCmdcacheadd_local146614972/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 cache add minikube-local-cache-test:functional-735851
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 cache delete minikube-local-cache-test:functional-735851
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-735851
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.10s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-735851 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (329.193104ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-735851 cache reload: (1.099080657s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.16s)
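
The reload sequence above is the documented way to restore cached images after they are removed from the runtime; a minimal replay using the same image as the test:

	out/minikube-linux-arm64 -p functional-735851 ssh "sudo crictl rmi registry.k8s.io/pause:latest"
	out/minikube-linux-arm64 -p functional-735851 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"   # exit 1: image gone
	out/minikube-linux-arm64 -p functional-735851 cache reload
	out/minikube-linux-arm64 -p functional-735851 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"   # exit 0: image restored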

TestFunctional/serial/CacheCmd/cache/delete (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 kubectl -- --context functional-735851 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-735851 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

TestFunctional/serial/ExtraConfig (34.02s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-735851 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0108 20:20:46.918531  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-735851 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.012396444s)
functional_test.go:757: restart took 34.012510225s for "functional-735851" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.02s)
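
`--extra-config` forwards component flags through to kubeadm at (re)start, which is how the admission plugin above gets enabled without editing manifests; the general shape is component.key=value:

	out/minikube-linux-arm64 start -p functional-735851 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all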

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-735851 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.81s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-735851 logs: (1.810410949s)
--- PASS: TestFunctional/serial/LogsCmd (1.81s)

TestFunctional/serial/LogsFileCmd (1.83s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 logs --file /tmp/TestFunctionalserialLogsFileCmd1784932839/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-735851 logs --file /tmp/TestFunctionalserialLogsFileCmd1784932839/001/logs.txt: (1.829613837s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.83s)

TestFunctional/serial/InvalidService (4.27s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-735851 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-735851
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-735851: exit status 115 (446.455547ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31740 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-735851 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.27s)
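
Exit status 115 (SVC_UNREACHABLE) is minikube's report that the Service exists but has no running backing pod, so `minikube service` still prints the NodePort table and then refuses the URL; the reproduction is exactly what the test ran:

	kubectl --context functional-735851 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-arm64 service invalid-svc -p functional-735851   # exit 115: SVC_UNREACHABLE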

TestFunctional/parallel/ConfigCmd (0.59s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-735851 config get cpus: exit status 14 (80.287234ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-735851 config get cpus: exit status 14 (89.285042ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.59s)
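
The exit-14 assertions above encode the contract of `minikube config`: `get` on an unset key fails with status 14 rather than printing an empty value. Replay:

	out/minikube-linux-arm64 -p functional-735851 config set cpus 2
	out/minikube-linux-arm64 -p functional-735851 config get cpus     # prints 2
	out/minikube-linux-arm64 -p functional-735851 config unset cpus
	out/minikube-linux-arm64 -p functional-735851 config get cpus     # exit 14: key not found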

TestFunctional/parallel/DashboardCmd (12.26s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-735851 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-735851 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 664540: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.26s)

TestFunctional/parallel/DryRun (0.51s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-735851 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-735851 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (217.55443ms)
-- stdout --
	* [functional-735851] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-633350/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-633350/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0108 20:22:01.155373  663729 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:22:01.155601  663729 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:22:01.155636  663729 out.go:309] Setting ErrFile to fd 2...
	I0108 20:22:01.155671  663729 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:22:01.156079  663729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-633350/.minikube/bin
	I0108 20:22:01.156763  663729 out.go:303] Setting JSON to false
	I0108 20:22:01.158064  663729 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11064,"bootTime":1704734258,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0108 20:22:01.158181  663729 start.go:138] virtualization:  
	I0108 20:22:01.161135  663729 out.go:177] * [functional-735851] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 20:22:01.163947  663729 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:22:01.165896  663729 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:22:01.164097  663729 notify.go:220] Checking for updates...
	I0108 20:22:01.168301  663729 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-633350/kubeconfig
	I0108 20:22:01.170518  663729 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-633350/.minikube
	I0108 20:22:01.172651  663729 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0108 20:22:01.174575  663729 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:22:01.176922  663729 config.go:182] Loaded profile config "functional-735851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:22:01.177446  663729 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:22:01.205237  663729 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:22:01.205364  663729 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:22:01.289149  663729 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2024-01-08 20:22:01.279203966 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:22:01.289247  663729 docker.go:295] overlay module found
	I0108 20:22:01.291543  663729 out.go:177] * Using the docker driver based on existing profile
	I0108 20:22:01.293512  663729 start.go:298] selected driver: docker
	I0108 20:22:01.293528  663729 start.go:902] validating driver "docker" against &{Name:functional-735851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-735851 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:22:01.293631  663729 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:22:01.296255  663729 out.go:177] 
	W0108 20:22:01.298196  663729 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0108 20:22:01.300127  663729 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-735851 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.51s)
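
The failing half of DryRun leans on start-time validation: any requested memory below the usable minimum (1800MB in this run) aborts with RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23 before anything is created, while `--dry-run` keeps the passing half side-effect free as well:

	out/minikube-linux-arm64 start -p functional-735851 --dry-run --memory 250MB --driver=docker --container-runtime=crio   # exit 23
	out/minikube-linux-arm64 start -p functional-735851 --dry-run --driver=docker --container-runtime=crio                 # validates, exit 0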

TestFunctional/parallel/InternationalLanguage (0.26s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-735851 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-735851 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (258.218701ms)
-- stdout --
	* [functional-735851] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-633350/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-633350/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0108 20:22:04.255503  664316 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:22:04.255866  664316 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:22:04.255891  664316 out.go:309] Setting ErrFile to fd 2...
	I0108 20:22:04.255912  664316 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:22:04.257985  664316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-633350/.minikube/bin
	I0108 20:22:04.258484  664316 out.go:303] Setting JSON to false
	I0108 20:22:04.259544  664316 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11067,"bootTime":1704734258,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0108 20:22:04.259647  664316 start.go:138] virtualization:  
	I0108 20:22:04.262814  664316 out.go:177] * [functional-735851] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I0108 20:22:04.266534  664316 notify.go:220] Checking for updates...
	I0108 20:22:04.270552  664316 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:22:04.272938  664316 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:22:04.274967  664316 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-633350/kubeconfig
	I0108 20:22:04.277205  664316 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-633350/.minikube
	I0108 20:22:04.279363  664316 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0108 20:22:04.281433  664316 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:22:04.284165  664316 config.go:182] Loaded profile config "functional-735851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:22:04.284765  664316 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:22:04.311331  664316 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:22:04.311461  664316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:22:04.413480  664316 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2024-01-08 20:22:04.403516314 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:22:04.413574  664316 docker.go:295] overlay module found
	I0108 20:22:04.415780  664316 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0108 20:22:04.417736  664316 start.go:298] selected driver: docker
	I0108 20:22:04.417753  664316 start.go:902] validating driver "docker" against &{Name:functional-735851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-735851 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:22:04.417875  664316 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:22:04.420476  664316 out.go:177] 
	W0108 20:22:04.422894  664316 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0108 20:22:04.425396  664316 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)

TestFunctional/parallel/StatusCmd (1.4s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.40s)
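
`status -f` accepts a Go template over the status struct (the test's own template shows the available fields: .Host, .Kubelet, .APIServer, .Kubeconfig), so a one-line health summary is a template away; a minimal sketch:

	out/minikube-linux-arm64 -p functional-735851 status -f host:{{.Host}},apiserver:{{.APIServer}}
	out/minikube-linux-arm64 -p functional-735851 status -o json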

TestFunctional/parallel/ServiceCmdConnect (10.68s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-735851 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-735851 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-hjzwc" [c0274b57-608e-4741-bad1-9f40b78503c8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-hjzwc" [c0274b57-608e-4741-bad1-9f40b78503c8] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003703127s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30218
functional_test.go:1674: http://192.168.49.2:30218: success! body:
Hostname: hello-node-connect-7799dfb7c6-hjzwc
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30218
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.68s)
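
The connectivity check is a plain NodePort round trip: create a deployment, expose it, ask minikube for the URL, and fetch it. The curl step below is an illustrative stand-in for the test's in-process HTTP client:

	kubectl --context functional-735851 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-735851 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-arm64 -p functional-735851 service hello-node-connect --url)
	curl "$URL"   # echoserver reports hostname and request headers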

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (25.73s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7ade65f9-de3c-4127-a2ff-c3b0680565a2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004978751s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-735851 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-735851 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-735851 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-735851 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-735851 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1c69a477-b38d-414e-a163-dd13c5fd81ae] Pending
helpers_test.go:344: "sp-pod" [1c69a477-b38d-414e-a163-dd13c5fd81ae] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1c69a477-b38d-414e-a163-dd13c5fd81ae] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004086276s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-735851 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-735851 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-735851 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9dab643d-e4da-45f8-b58c-b36fd5eeb46f] Pending
helpers_test.go:344: "sp-pod" [9dab643d-e4da-45f8-b58c-b36fd5eeb46f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004785931s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-735851 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.73s)
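
The part of this test that matters is persistence: a file written through the claim must survive deleting and re-creating the consuming pod, because the volume outlives the pod. The same flow by hand, using the test's own manifests:

	kubectl --context functional-735851 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-735851 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-735851 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-735851 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-735851 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-735851 exec sp-pod -- ls /tmp/mount   # foo survives the pod restart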

TestFunctional/parallel/SSHCmd (0.86s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.86s)

TestFunctional/parallel/CpCmd (2.26s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh -n functional-735851 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 cp functional-735851:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2920240445/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh -n functional-735851 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh -n functional-735851 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.26s)
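
`minikube cp` copies in both directions and creates missing target directories, which is what the three invocations above cover; the node-to-host form names the source as profile:path:

	out/minikube-linux-arm64 -p functional-735851 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-arm64 -p functional-735851 cp functional-735851:/home/docker/cp-test.txt ./cp-test.txt
	out/minikube-linux-arm64 -p functional-735851 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt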

TestFunctional/parallel/FileSync (0.37s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/638732/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh "sudo cat /etc/test/nested/copy/638732/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)
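
File sync works by mirroring everything under the .minikube/files directory into the node at the same relative path during provisioning; a sketch of seeding the file this test looks for (the 638732 path component comes from this run's test process ID):

	mkdir -p ~/.minikube/files/etc/test/nested/copy/638732
	echo 'Test file for checking file sync process' > ~/.minikube/files/etc/test/nested/copy/638732/hosts
	out/minikube-linux-arm64 -p functional-735851 ssh "cat /etc/test/nested/copy/638732/hosts"   # after a start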

TestFunctional/parallel/CertSync (2.4s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/638732.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh "sudo cat /etc/ssl/certs/638732.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/638732.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh "sudo cat /usr/share/ca-certificates/638732.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/6387322.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh "sudo cat /etc/ssl/certs/6387322.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/6387322.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh "sudo cat /usr/share/ca-certificates/6387322.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.40s)
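
The `51391683.0`-style names checked above are OpenSSL subject-hash links: certificates dropped under the .minikube certs directory are installed into the node both by filename and by hash. Assuming the PEM is still on the host, the hash can be confirmed directly:

	openssl x509 -hash -noout -in ~/.minikube/certs/638732.pem   # prints the subject hash, e.g. 51391683
	out/minikube-linux-arm64 -p functional-735851 ssh "sudo cat /etc/ssl/certs/51391683.0"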

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-735851 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.83s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-735851 ssh "sudo systemctl is-active docker": exit status 1 (433.950594ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-735851 ssh "sudo systemctl is-active containerd": exit status 1 (393.561678ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.83s)
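
The exit codes here are systemd semantics rather than test plumbing: `systemctl is-active` exits 0 for an active unit and 3 for an inactive one, so with crio as the cluster runtime the other two runtimes are expected to produce exactly this stdout/exit pair:

	out/minikube-linux-arm64 -p functional-735851 ssh "sudo systemctl is-active crio"        # active, exit 0
	out/minikube-linux-arm64 -p functional-735851 ssh "sudo systemctl is-active docker"      # inactive, exit 3
	out/minikube-linux-arm64 -p functional-735851 ssh "sudo systemctl is-active containerd"  # inactive, exit 3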

TestFunctional/parallel/License (0.36s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.36s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.44s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-linux-arm64 -p functional-735851 version -o=json --components: (1.444453992s)
--- PASS: TestFunctional/parallel/Version/components (1.44s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 image ls --format short --alsologtostderr
2024/01/08 20:22:16 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-735851 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-735851
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-735851 image ls --format short --alsologtostderr:
I0108 20:22:16.373740  665687 out.go:296] Setting OutFile to fd 1 ...
I0108 20:22:16.373903  665687 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:22:16.373910  665687 out.go:309] Setting ErrFile to fd 2...
I0108 20:22:16.373917  665687 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:22:16.374255  665687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-633350/.minikube/bin
I0108 20:22:16.374990  665687 config.go:182] Loaded profile config "functional-735851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:22:16.375213  665687 config.go:182] Loaded profile config "functional-735851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:22:16.375831  665687 cli_runner.go:164] Run: docker container inspect functional-735851 --format={{.State.Status}}
I0108 20:22:16.394832  665687 ssh_runner.go:195] Run: systemctl --version
I0108 20:22:16.394892  665687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-735851
I0108 20:22:16.418622  665687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/functional-735851/id_rsa Username:docker}
I0108 20:22:16.536040  665687 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-735851 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| docker.io/library/nginx                 | alpine             | 74077e780ec71 | 45.3MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| gcr.io/google-containers/addon-resizer  | functional-735851  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-proxy              | v1.28.4            | 3ca3ca488cf13 | 70MB   |
| registry.k8s.io/kube-controller-manager | v1.28.4            | 9961cbceaf234 | 117MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | 05c284c929889 | 59.3MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| docker.io/library/nginx                 | latest             | 8aea65d81da20 | 196MB  |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 04b4c447bb9d4 | 121MB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-735851 image ls --format table --alsologtostderr:
I0108 20:22:17.284101  665839 out.go:296] Setting OutFile to fd 1 ...
I0108 20:22:17.284286  665839 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:22:17.284300  665839 out.go:309] Setting ErrFile to fd 2...
I0108 20:22:17.284308  665839 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:22:17.284647  665839 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-633350/.minikube/bin
I0108 20:22:17.285404  665839 config.go:182] Loaded profile config "functional-735851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:22:17.285602  665839 config.go:182] Loaded profile config "functional-735851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:22:17.286184  665839 cli_runner.go:164] Run: docker container inspect functional-735851 --format={{.State.Status}}
I0108 20:22:17.318136  665839 ssh_runner.go:195] Run: systemctl --version
I0108 20:22:17.318190  665839 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-735851
I0108 20:22:17.342684  665839 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/functional-735851/id_rsa Username:docker}
I0108 20:22:17.452923  665839 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-735851 image ls --format json --alsologtostderr:
[{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"},
{"id":"74077e780ec714353793e0ef5677b55d7396aa1d31e77ec899f54842f7142448","repoDigests":["docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb","docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45330189"},
{"id":"8aea65d81da202cf886d7766c7f2691bb9e363c6b5d9b1f5d9ddaaa4bc1e90c2","repoDigests":["docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026","docker.io/library/nginx@sha256:73a06b3a2577448f9acc23502a0cb4d41919da9cc5035e66b0a9a09715397684"],"repoTags":["docker.io/library/nginx:latest"],"size":"196113558"},
{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},
{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb","registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"121119694"},
{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},
{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},
{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},
{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},
{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},
{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},
{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"69992343"},
{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"59253556"},
{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},
{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},
{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-735851"],"size":"34114467"},
{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},
{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},
{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"117252916"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-735851 image ls --format json --alsologtostderr:
I0108 20:22:16.979025  665769 out.go:296] Setting OutFile to fd 1 ...
I0108 20:22:16.979209  665769 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:22:16.979215  665769 out.go:309] Setting ErrFile to fd 2...
I0108 20:22:16.979221  665769 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:22:16.979498  665769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-633350/.minikube/bin
I0108 20:22:16.980175  665769 config.go:182] Loaded profile config "functional-735851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:22:16.980304  665769 config.go:182] Loaded profile config "functional-735851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:22:16.980902  665769 cli_runner.go:164] Run: docker container inspect functional-735851 --format={{.State.Status}}
I0108 20:22:17.001437  665769 ssh_runner.go:195] Run: systemctl --version
I0108 20:22:17.001502  665769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-735851
I0108 20:22:17.033882  665769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/functional-735851/id_rsa Username:docker}
I0108 20:22:17.136459  665769 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)
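The JSON emitted by "image ls --format json" is an array of objects with id, repoDigests, repoTags, and size fields, as visible above. A small decoding sketch (the struct and its name are assumptions; the sample entry is copied verbatim from the output):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// image mirrors the fields seen in the "image ls --format json" output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// One entry copied verbatim from the run above.
	raw := `[{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"}]`
	var images []image
	if err := json.Unmarshal([]byte(raw), &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}
```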
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-735851 image ls --format yaml --alsologtostderr:
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
- registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "121119694"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "69992343"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "59253556"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 74077e780ec714353793e0ef5677b55d7396aa1d31e77ec899f54842f7142448
repoDigests:
- docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb
- docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59
repoTags:
- docker.io/library/nginx:alpine
size: "45330189"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "117252916"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-735851
size: "34114467"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 8aea65d81da202cf886d7766c7f2691bb9e363c6b5d9b1f5d9ddaaa4bc1e90c2
repoDigests:
- docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026
- docker.io/library/nginx@sha256:73a06b3a2577448f9acc23502a0cb4d41919da9cc5035e66b0a9a09715397684
repoTags:
- docker.io/library/nginx:latest
size: "196113558"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-735851 image ls --format yaml --alsologtostderr:
I0108 20:22:16.653958  665717 out.go:296] Setting OutFile to fd 1 ...
I0108 20:22:16.654139  665717 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:22:16.654151  665717 out.go:309] Setting ErrFile to fd 2...
I0108 20:22:16.654157  665717 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:22:16.654452  665717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-633350/.minikube/bin
I0108 20:22:16.655188  665717 config.go:182] Loaded profile config "functional-735851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:22:16.655332  665717 config.go:182] Loaded profile config "functional-735851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:22:16.655838  665717 cli_runner.go:164] Run: docker container inspect functional-735851 --format={{.State.Status}}
I0108 20:22:16.674634  665717 ssh_runner.go:195] Run: systemctl --version
I0108 20:22:16.674692  665717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-735851
I0108 20:22:16.695068  665717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/functional-735851/id_rsa Username:docker}
I0108 20:22:16.792466  665717 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-735851 ssh pgrep buildkitd: exit status 1 (387.099976ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 image build -t localhost/my-image:functional-735851 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-735851 image build -t localhost/my-image:functional-735851 testdata/build --alsologtostderr: (2.201493108s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-735851 image build -t localhost/my-image:functional-735851 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 1d69b434f4f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-735851
--> 47500da5201
Successfully tagged localhost/my-image:functional-735851
47500da520165322737a9708fa66effc222c6ae59057e085fbd771e9a641d51b
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-735851 image build -t localhost/my-image:functional-735851 testdata/build --alsologtostderr:
I0108 20:22:17.144376  665812 out.go:296] Setting OutFile to fd 1 ...
I0108 20:22:17.145173  665812 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:22:17.145212  665812 out.go:309] Setting ErrFile to fd 2...
I0108 20:22:17.145235  665812 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:22:17.145566  665812 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-633350/.minikube/bin
I0108 20:22:17.146460  665812 config.go:182] Loaded profile config "functional-735851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:22:17.149033  665812 config.go:182] Loaded profile config "functional-735851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:22:17.149751  665812 cli_runner.go:164] Run: docker container inspect functional-735851 --format={{.State.Status}}
I0108 20:22:17.170832  665812 ssh_runner.go:195] Run: systemctl --version
I0108 20:22:17.170902  665812 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-735851
I0108 20:22:17.199214  665812 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/functional-735851/id_rsa Username:docker}
I0108 20:22:17.300639  665812 build_images.go:151] Building image from path: /tmp/build.3328368641.tar
I0108 20:22:17.300708  665812 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0108 20:22:17.313419  665812 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3328368641.tar
I0108 20:22:17.321274  665812 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3328368641.tar: stat -c "%s %y" /var/lib/minikube/build/build.3328368641.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3328368641.tar': No such file or directory
I0108 20:22:17.321299  665812 ssh_runner.go:362] scp /tmp/build.3328368641.tar --> /var/lib/minikube/build/build.3328368641.tar (3072 bytes)
I0108 20:22:17.357951  665812 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3328368641
I0108 20:22:17.368895  665812 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3328368641 -xf /var/lib/minikube/build/build.3328368641.tar
I0108 20:22:17.382124  665812 crio.go:297] Building image: /var/lib/minikube/build/build.3328368641
I0108 20:22:17.382196  665812 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-735851 /var/lib/minikube/build/build.3328368641 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0108 20:22:19.248683  665812 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-735851 /var/lib/minikube/build/build.3328368641 --cgroup-manager=cgroupfs: (1.86646374s)
I0108 20:22:19.248751  665812 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3328368641
I0108 20:22:19.259053  665812 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3328368641.tar
I0108 20:22:19.269387  665812 build_images.go:207] Built localhost/my-image:functional-735851 from /tmp/build.3328368641.tar
I0108 20:22:19.269416  665812 build_images.go:123] succeeded building to: functional-735851
I0108 20:22:19.269428  665812 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.85s)
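The stderr above shows the build path on the cri-o runtime: the build context is tarred locally, copied to /var/lib/minikube/build inside the node, untarred, and built with podman under cgroupfs. A rough standalone sketch of the same user-facing sequence (a hypothetical rerun, not suite code; all arguments copied from the log):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	minikube := "out/minikube-linux-arm64"
	// Build the image from testdata/build inside the cluster node...
	build := exec.Command(minikube, "-p", "functional-735851", "image", "build",
		"-t", "localhost/my-image:functional-735851", "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("build failed: %v\n%s", err, out))
	}
	// ...then confirm it shows up in "image ls", as functional_test.go:447 does.
	ls, err := exec.Command(minikube, "-p", "functional-735851", "image", "ls").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("built image listed:", strings.Contains(string(ls), "localhost/my-image"))
}
```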
TestFunctional/parallel/ImageCommands/Setup (1.71s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.684578151s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-735851
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.71s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 image load --daemon gcr.io/google-containers/addon-resizer:functional-735851 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-735851 image load --daemon gcr.io/google-containers/addon-resizer:functional-735851 --alsologtostderr: (4.595062711s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.93s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "396.179025ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "84.101075ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "495.405705ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "95.763956ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.73s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-735851 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-735851 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-735851 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 662149: os: process already finished
helpers_test.go:502: unable to terminate pid 662038: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-735851 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.73s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-735851 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.46s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-735851 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [70d8c42d-4f6d-43a2-bfe7-11d35429d40f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [70d8c42d-4f6d-43a2-bfe7-11d35429d40f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.00433021s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.46s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 image load --daemon gcr.io/google-containers/addon-resizer:functional-735851 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-735851 image load --daemon gcr.io/google-containers/addon-resizer:functional-735851 --alsologtostderr: (3.558701706s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.88s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.479831283s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-735851
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 image load --daemon gcr.io/google-containers/addon-resizer:functional-735851 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-735851 image load --daemon gcr.io/google-containers/addon-resizer:functional-735851 --alsologtostderr: (3.683688549s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.45s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 image save gcr.io/google-containers/addon-resizer:functional-735851 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.98s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-735851 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)
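The ingress IP only appears once "minikube tunnel" has programmed the route for the LoadBalancer service, so the suite polls the service status. A sketch of equivalent polling (the retry budget is invented; the jsonpath query is the one used above):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll the LoadBalancer status with the same jsonpath query the test uses.
	for i := 0; i < 30; i++ {
		out, _ := exec.Command("kubectl", "--context", "functional-735851",
			"get", "svc", "nginx-svc", "-o",
			"jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		if ip := string(out); ip != "" {
			fmt.Println("tunnel ingress IP:", ip) // 10.101.173.133 in the run above
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("no ingress IP assigned; is the tunnel running?")
}
```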
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.173.133 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-735851 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 image rm gcr.io/google-containers/addon-resizer:functional-735851 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-735851 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.030392473s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.29s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-735851
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 image save --daemon gcr.io/google-containers/addon-resizer:functional-735851 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-735851
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.95s)
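Taken together, ImageSaveToFile, ImageLoadFromFile, and ImageSaveDaemon above exercise a full save/load round trip for the addon-resizer image. A compressed sketch of that sequence (a hypothetical rerun, not suite code; the run helper is invented, paths and image name are copied from the log):

```go
package main

import "os/exec"

// run is a hypothetical helper: execute a command and panic on failure.
func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		panic(string(out))
	}
}

func main() {
	mk := "out/minikube-linux-arm64"
	img := "gcr.io/google-containers/addon-resizer:functional-735851"
	tar := "/home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar"
	run(mk, "-p", "functional-735851", "image", "save", img, tar)        // ImageSaveToFile
	run(mk, "-p", "functional-735851", "image", "load", tar)             // ImageLoadFromFile
	run("docker", "rmi", img)                                            // drop the local copy
	run(mk, "-p", "functional-735851", "image", "save", "--daemon", img) // ImageSaveDaemon
	run("docker", "image", "inspect", img)                               // verify it is back
}
```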
TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-735851 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-735851 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-mrdcl" [b06fe6f4-cceb-4788-be7d-920290c91a76] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-mrdcl" [b06fe6f4-cceb-4788-be7d-920290c91a76] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004030561s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)
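The hello-node deployment above is created and exposed with plain kubectl before the service subcommands are exercised. A bare sketch of the same two steps (a hypothetical rerun; image name and port copied from the log):

```go
package main

import "os/exec"

func main() {
	// create deployment + expose, mirroring functional_test.go:1436 and :1444.
	cmds := [][]string{
		{"kubectl", "--context", "functional-735851", "create", "deployment",
			"hello-node", "--image=registry.k8s.io/echoserver-arm:1.8"},
		{"kubectl", "--context", "functional-735851", "expose", "deployment",
			"hello-node", "--type=NodePort", "--port=8080"},
	}
	for _, c := range cmds {
		if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
			panic(string(out))
		}
	}
}
```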
TestFunctional/parallel/ServiceCmd/List (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.54s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 service list -o json
functional_test.go:1493: Took "549.829146ms" to run "out/minikube-linux-arm64 -p functional-735851 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:32638
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

TestFunctional/parallel/ServiceCmd/Format (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

TestFunctional/parallel/ServiceCmd/URL (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:32638
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)
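HTTPS, Format, and URL above all resolve the same NodePort endpoint (192.168.49.2:32638). A trivial sketch of probing it directly (assumes the endpoint from this run were still live):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Endpoint resolved by "minikube service hello-node --url" above.
	resp, err := http.Get("http://192.168.49.2:32638")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("hello-node responded:", resp.Status)
}
```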
TestFunctional/parallel/MountCmd/any-port (7.66s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-735851 /tmp/TestFunctionalparallelMountCmdany-port166663511/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1704745321604281743" to /tmp/TestFunctionalparallelMountCmdany-port166663511/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1704745321604281743" to /tmp/TestFunctionalparallelMountCmdany-port166663511/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1704745321604281743" to /tmp/TestFunctionalparallelMountCmdany-port166663511/001/test-1704745321604281743
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-735851 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (376.973408ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan  8 20:22 created-by-test
-rw-r--r-- 1 docker docker 24 Jan  8 20:22 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan  8 20:22 test-1704745321604281743
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh cat /mount-9p/test-1704745321604281743
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-735851 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [92a00947-a971-4fde-b0c8-c0c8c40af440] Pending
helpers_test.go:344: "busybox-mount" [92a00947-a971-4fde-b0c8-c0c8c40af440] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [92a00947-a971-4fde-b0c8-c0c8c40af440] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [92a00947-a971-4fde-b0c8-c0c8c40af440] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.008636s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-735851 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-735851 /tmp/TestFunctionalparallelMountCmdany-port166663511/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.66s)
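The first findmnt probe above fails with exit status 1 because the 9p mount is still being established, and the helper simply retries. A sketch of that retry loop (the retry budget is invented; the command is copied from the log):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 10; i++ {
		// Same probe the test runs over ssh; grep makes the exit code meaningful.
		err := exec.Command("out/minikube-linux-arm64", "-p", "functional-735851",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("9p mount is visible in the guest")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("mount never appeared")
}
```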
TestFunctional/parallel/MountCmd/specific-port (1.58s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-735851 /tmp/TestFunctionalparallelMountCmdspecific-port3497704030/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-735851 /tmp/TestFunctionalparallelMountCmdspecific-port3497704030/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-735851 ssh "sudo umount -f /mount-9p": exit status 1 (334.679715ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-735851 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-735851 /tmp/TestFunctionalparallelMountCmdspecific-port3497704030/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.58s)
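
The Non-zero exit above is not a failure: stopping the mount daemon had already detached the share, so the follow-up forced umount reporting "not mounted" is the outcome this test expects. A minimal sketch of pinning the 9p server to a fixed port and verifying it, under the same assumptions as the previous sketch:

	minikube mount -p functional-735851 /tmp/mnt:/mount-9p --port 46464 &    # pin the 9p server to port 46464
	minikube -p functional-735851 ssh "findmnt -T /mount-9p | grep 9p"       # confirm the guest sees a 9p filesystem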

TestFunctional/parallel/MountCmd/VerifyCleanup (3.26s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-735851 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1713408707/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-735851 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1713408707/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-735851 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1713408707/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-735851 ssh "findmnt -T" /mount1: exit status 1 (1.245087661s)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-735851 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-735851 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-735851 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1713408707/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-735851 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1713408707/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-735851 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1713408707/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (3.26s)
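
The first findmnt above fails and is retried while the three mount daemons are still coming up; cleanup then hinges on minikube mount --kill=true reaping all of them at once. A minimal sketch of the same cleanup against this profile:

	minikube mount -p functional-735851 --kill=true            # kill every mount daemon owned by the profile
	minikube -p functional-735851 ssh "findmnt -T" /mount1     # should now exit non-zero: nothing mounted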

TestFunctional/delete_addon-resizer_images (0.09s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-735851
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-735851
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-735851
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (96.27s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-105176 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0108 20:23:03.070880  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
E0108 20:23:30.759060  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-105176 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m36.265037415s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (96.27s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.49s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-105176 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-105176 addons enable ingress --alsologtostderr -v=5: (11.49446633s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.49s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.64s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-105176 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.64s)

TestJSONOutput/start/Command (53.68s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-748383 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0108 20:27:48.299400  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
E0108 20:28:03.070898  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-748383 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (53.680163506s)
--- PASS: TestJSONOutput/start/Command (53.68s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.82s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-748383 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.82s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.74s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-748383 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.74s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.99s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-748383 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-748383 --output=json --user=testUser: (5.989728622s)
--- PASS: TestJSONOutput/stop/Command (5.99s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.27s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-634334 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-634334 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (98.306308ms)

-- stdout --
	{"specversion":"1.0","id":"c094db81-1ec1-4345-8b69-810af55361f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-634334] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bbeb4b22-d4e5-4243-8d0d-b5ee77ff6768","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17907"}}
	{"specversion":"1.0","id":"08a86f48-5bda-42d3-911a-057da001ea33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"21d0e2d0-ee9b-4f94-a2f6-c28bc42eb91e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17907-633350/kubeconfig"}}
	{"specversion":"1.0","id":"e37103eb-d173-47be-a11b-da0a2562ef14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-633350/.minikube"}}
	{"specversion":"1.0","id":"770b9528-4d77-44e6-94d2-e3eae0b45f3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e6ebe68a-b39e-44b2-9dd0-5907560ee91c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c1edef4b-620c-45a2-92b6-2db15c77fd0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-634334" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-634334
--- PASS: TestErrorJSONOutput (0.27s)
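
Each line in the stdout block above is a CloudEvents-style JSON object, so the stream can be filtered mechanically. A minimal sketch using jq (assumed installed); the demo profile name is a placeholder, and the .type and .data.message fields are the ones visible in the captured events:

	minikube start -p demo --output=json --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'    # print only error events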

TestKicCustomNetwork/create_custom_network (51.26s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-002347 --network=
E0108 20:29:10.220292  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
E0108 20:29:11.027309  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
E0108 20:29:11.032584  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
E0108 20:29:11.042855  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
E0108 20:29:11.063191  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
E0108 20:29:11.103527  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
E0108 20:29:11.241500  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
E0108 20:29:11.401960  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
E0108 20:29:11.722531  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
E0108 20:29:12.363477  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
E0108 20:29:13.643684  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
E0108 20:29:16.204717  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
E0108 20:29:21.325707  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
E0108 20:29:31.565917  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-002347 --network=: (49.099085031s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-002347" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-002347
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-002347: (2.134838866s)
--- PASS: TestKicCustomNetwork/create_custom_network (51.26s)

TestKicCustomNetwork/use_default_bridge_network (33.57s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-656718 --network=bridge
E0108 20:29:52.046104  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-656718 --network=bridge: (31.51906286s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-656718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-656718
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-656718: (2.026072346s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.57s)

TestKicExistingNetwork (31.69s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-814600 --network=existing-network
E0108 20:30:33.006356  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-814600 --network=existing-network: (29.523437175s)
helpers_test.go:175: Cleaning up "existing-network-814600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-814600
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-814600: (2.011320894s)
--- PASS: TestKicExistingNetwork (31.69s)

TestKicCustomSubnet (36.19s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-947785 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-947785 --subnet=192.168.60.0/24: (34.052064274s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-947785 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-947785" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-947785
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-947785: (2.117342433s)
--- PASS: TestKicCustomSubnet (36.19s)
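
The subnet assertion is a single docker network inspect against the network minikube created. Replayed by hand with this run's names and subnet:

	minikube start -p custom-subnet-947785 --subnet=192.168.60.0/24
	docker network inspect custom-subnet-947785 --format "{{(index .IPAM.Config 0).Subnet}}"    # expect 192.168.60.0/24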

TestKicStaticIP (38.59s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-780721 --static-ip=192.168.200.200
E0108 20:31:26.377884  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
E0108 20:31:54.060586  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
E0108 20:31:54.926553  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-780721 --static-ip=192.168.200.200: (36.255877127s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-780721 ip
helpers_test.go:175: Cleaning up "static-ip-780721" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-780721
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-780721: (2.156323691s)
--- PASS: TestKicStaticIP (38.59s)
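
The static IP check pairs the --static-ip flag with minikube ip, exactly as logged above. By hand, with this run's values:

	minikube start -p static-ip-780721 --static-ip=192.168.200.200
	minikube -p static-ip-780721 ip    # expect 192.168.200.200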

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (69.82s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-707232 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-707232 --driver=docker  --container-runtime=crio: (30.800359248s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-710178 --driver=docker  --container-runtime=crio
E0108 20:33:03.070548  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-710178 --driver=docker  --container-runtime=crio: (33.684342407s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-707232
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-710178
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-710178" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-710178
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-710178: (2.012871772s)
helpers_test.go:175: Cleaning up "first-707232" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-707232
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-707232: (2.014428703s)
--- PASS: TestMinikubeProfile (69.82s)
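
profile list -ojson is what the test inspects after each profile switch. A minimal sketch of pulling the names out with jq; the top-level valid array and Name field are an assumption about the JSON shape, not something captured in this log:

	minikube profile list -ojson | jq -r '.valid[].Name'    # assumed schema: a "valid" array of objects with a Name field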

TestMountStart/serial/StartWithMountFirst (7.13s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-347420 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-347420 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.129098103s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.13s)

TestMountStart/serial/VerifyMountFirst (0.3s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-347420 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

TestMountStart/serial/StartWithMountSecond (10.4s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-349299 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-349299 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (9.400744495s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.40s)

TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-349299 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-347420 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-347420 --alsologtostderr -v=5: (1.676336376s)
--- PASS: TestMountStart/serial/DeleteFirst (1.68s)

TestMountStart/serial/VerifyMountPostDelete (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-349299 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-349299
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-349299: (1.224153075s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (7.8s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-349299
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-349299: (6.803833456s)
--- PASS: TestMountStart/serial/RestartStopped (7.80s)

TestMountStart/serial/VerifyMountPostStop (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-349299 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

TestMultiNode/serial/FreshStart2Nodes (93.55s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-933566 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0108 20:34:11.026606  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
E0108 20:34:26.119836  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
E0108 20:34:38.766757  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-arm64 start -p multinode-933566 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m32.994598598s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (93.55s)
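
A fresh two-node start reduces to one command plus a status check. A minimal sketch with this run's flags, plain minikube standing in for the test binary:

	minikube start -p multinode-933566 --nodes=2 --memory=2200 --driver=docker --container-runtime=crio
	minikube -p multinode-933566 status    # expect a control plane plus one worker, all Running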

TestMultiNode/serial/DeployApp2Nodes (5.92s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-933566 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-933566 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-933566 -- rollout status deployment/busybox: (3.886009921s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-933566 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-933566 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-933566 -- exec busybox-5bc68d56bd-lxnll -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-933566 -- exec busybox-5bc68d56bd-zsk76 -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-933566 -- exec busybox-5bc68d56bd-lxnll -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-933566 -- exec busybox-5bc68d56bd-zsk76 -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-933566 -- exec busybox-5bc68d56bd-lxnll -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-933566 -- exec busybox-5bc68d56bd-zsk76 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.92s)
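
The DNS coverage above boils down to reading the busybox pod IPs and exec'ing nslookup in each pod. A minimal sketch; plain kubectl with --context stands in for the minikube kubectl wrapper, and the pod name is the one from this run (it changes per deployment). One pod IP per node is the expectation if the deployment spread as intended:

	kubectl --context multinode-933566 get pods -o jsonpath='{.items[*].status.podIP}'    # list the busybox pod IPs
	kubectl --context multinode-933566 exec busybox-5bc68d56bd-lxnll -- nslookup kubernetes.default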

TestMultiNode/serial/AddNode (48.97s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-933566 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-933566 -v 3 --alsologtostderr: (48.242990938s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.97s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-933566 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.35s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

TestMultiNode/serial/CopyFile (11.16s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 cp testdata/cp-test.txt multinode-933566:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 ssh -n multinode-933566 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 cp multinode-933566:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3780639016/001/cp-test_multinode-933566.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 ssh -n multinode-933566 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 cp multinode-933566:/home/docker/cp-test.txt multinode-933566-m02:/home/docker/cp-test_multinode-933566_multinode-933566-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 ssh -n multinode-933566 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 ssh -n multinode-933566-m02 "sudo cat /home/docker/cp-test_multinode-933566_multinode-933566-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 cp multinode-933566:/home/docker/cp-test.txt multinode-933566-m03:/home/docker/cp-test_multinode-933566_multinode-933566-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 ssh -n multinode-933566 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 ssh -n multinode-933566-m03 "sudo cat /home/docker/cp-test_multinode-933566_multinode-933566-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 cp testdata/cp-test.txt multinode-933566-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 ssh -n multinode-933566-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 cp multinode-933566-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3780639016/001/cp-test_multinode-933566-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 ssh -n multinode-933566-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 cp multinode-933566-m02:/home/docker/cp-test.txt multinode-933566:/home/docker/cp-test_multinode-933566-m02_multinode-933566.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 ssh -n multinode-933566-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 ssh -n multinode-933566 "sudo cat /home/docker/cp-test_multinode-933566-m02_multinode-933566.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 cp multinode-933566-m02:/home/docker/cp-test.txt multinode-933566-m03:/home/docker/cp-test_multinode-933566-m02_multinode-933566-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 ssh -n multinode-933566-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 ssh -n multinode-933566-m03 "sudo cat /home/docker/cp-test_multinode-933566-m02_multinode-933566-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 cp testdata/cp-test.txt multinode-933566-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 ssh -n multinode-933566-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 cp multinode-933566-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3780639016/001/cp-test_multinode-933566-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 ssh -n multinode-933566-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 cp multinode-933566-m03:/home/docker/cp-test.txt multinode-933566:/home/docker/cp-test_multinode-933566-m03_multinode-933566.txt
E0108 20:36:26.377605  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 ssh -n multinode-933566-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 ssh -n multinode-933566 "sudo cat /home/docker/cp-test_multinode-933566-m03_multinode-933566.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 cp multinode-933566-m03:/home/docker/cp-test.txt multinode-933566-m02:/home/docker/cp-test_multinode-933566-m03_multinode-933566-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 ssh -n multinode-933566-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 ssh -n multinode-933566-m02 "sudo cat /home/docker/cp-test_multinode-933566-m03_multinode-933566-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.16s)
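
Every hop above is the same cp-then-cat round trip across nodes. One representative node-to-node pair from this run:

	minikube -p multinode-933566 cp testdata/cp-test.txt multinode-933566-m02:/home/docker/cp-test.txt
	minikube -p multinode-933566 ssh -n multinode-933566-m02 "sudo cat /home/docker/cp-test.txt"    # should echo the test file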

TestMultiNode/serial/StopNode (2.38s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-arm64 -p multinode-933566 node stop m03: (1.221435966s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-933566 status: exit status 7 (580.042637ms)

-- stdout --
	multinode-933566
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-933566-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-933566-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-933566 status --alsologtostderr: exit status 7 (576.138683ms)

-- stdout --
	multinode-933566
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-933566-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-933566-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0108 20:36:30.336437  712246 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:36:30.336617  712246 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:36:30.336648  712246 out.go:309] Setting ErrFile to fd 2...
	I0108 20:36:30.336669  712246 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:36:30.336956  712246 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-633350/.minikube/bin
	I0108 20:36:30.337177  712246 out.go:303] Setting JSON to false
	I0108 20:36:30.337306  712246 mustload.go:65] Loading cluster: multinode-933566
	I0108 20:36:30.337381  712246 notify.go:220] Checking for updates...
	I0108 20:36:30.337793  712246 config.go:182] Loaded profile config "multinode-933566": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:36:30.337832  712246 status.go:255] checking status of multinode-933566 ...
	I0108 20:36:30.338668  712246 cli_runner.go:164] Run: docker container inspect multinode-933566 --format={{.State.Status}}
	I0108 20:36:30.357957  712246 status.go:330] multinode-933566 host status = "Running" (err=<nil>)
	I0108 20:36:30.357993  712246 host.go:66] Checking if "multinode-933566" exists ...
	I0108 20:36:30.358284  712246 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-933566
	I0108 20:36:30.377477  712246 host.go:66] Checking if "multinode-933566" exists ...
	I0108 20:36:30.377824  712246 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:36:30.377871  712246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-933566
	I0108 20:36:30.408137  712246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33479 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/multinode-933566/id_rsa Username:docker}
	I0108 20:36:30.507542  712246 ssh_runner.go:195] Run: systemctl --version
	I0108 20:36:30.513015  712246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:36:30.526462  712246 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:36:30.611577  712246 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2024-01-08 20:36:30.601994037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:36:30.612178  712246 kubeconfig.go:92] found "multinode-933566" server: "https://192.168.58.2:8443"
	I0108 20:36:30.612204  712246 api_server.go:166] Checking apiserver status ...
	I0108 20:36:30.612248  712246 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:36:30.625263  712246 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1273/cgroup
	I0108 20:36:30.636647  712246 api_server.go:182] apiserver freezer: "7:freezer:/docker/21d6edc8691bb2b60d1720def2f012d16584a71959435035c4625be32f0c36cb/crio/crio-06e3ce0f154f824bb05b8f6c6d05bf3a10e00457044fe56794b850c7a00b48c5"
	I0108 20:36:30.636722  712246 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/21d6edc8691bb2b60d1720def2f012d16584a71959435035c4625be32f0c36cb/crio/crio-06e3ce0f154f824bb05b8f6c6d05bf3a10e00457044fe56794b850c7a00b48c5/freezer.state
	I0108 20:36:30.646979  712246 api_server.go:204] freezer state: "THAWED"
	I0108 20:36:30.647007  712246 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0108 20:36:30.655698  712246 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0108 20:36:30.655727  712246 status.go:421] multinode-933566 apiserver status = Running (err=<nil>)
	I0108 20:36:30.655743  712246 status.go:257] multinode-933566 status: &{Name:multinode-933566 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 20:36:30.655762  712246 status.go:255] checking status of multinode-933566-m02 ...
	I0108 20:36:30.656090  712246 cli_runner.go:164] Run: docker container inspect multinode-933566-m02 --format={{.State.Status}}
	I0108 20:36:30.674095  712246 status.go:330] multinode-933566-m02 host status = "Running" (err=<nil>)
	I0108 20:36:30.674117  712246 host.go:66] Checking if "multinode-933566-m02" exists ...
	I0108 20:36:30.674431  712246 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-933566-m02
	I0108 20:36:30.691967  712246 host.go:66] Checking if "multinode-933566-m02" exists ...
	I0108 20:36:30.692338  712246 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:36:30.692388  712246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-933566-m02
	I0108 20:36:30.710235  712246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/17907-633350/.minikube/machines/multinode-933566-m02/id_rsa Username:docker}
	I0108 20:36:30.804856  712246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:36:30.818482  712246 status.go:257] multinode-933566-m02 status: &{Name:multinode-933566-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0108 20:36:30.818519  712246 status.go:255] checking status of multinode-933566-m03 ...
	I0108 20:36:30.818852  712246 cli_runner.go:164] Run: docker container inspect multinode-933566-m03 --format={{.State.Status}}
	I0108 20:36:30.836460  712246 status.go:330] multinode-933566-m03 host status = "Stopped" (err=<nil>)
	I0108 20:36:30.836486  712246 status.go:343] host is not running, skipping remaining checks
	I0108 20:36:30.836494  712246 status.go:257] multinode-933566-m03 status: &{Name:multinode-933566-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.38s)
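The status stderr above shows the probe sequence minikube runs for the apiserver: resolve the kube-apiserver PID with pgrep, look up its freezer cgroup in /proc/<pid>/cgroup, confirm the cgroup is THAWED, and only then hit /healthz. A minimal standalone sketch of that sequence follows; the endpoint and the insecure TLS client are assumptions for illustration, not minikube's own implementation.

	// apiprobe.go - sketch of the apiserver probe sequence from the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os"
		"os/exec"
		"regexp"
		"strings"
	)

	func main() {
		// 1. Newest kube-apiserver process, as `sudo pgrep -xnf` does above.
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "no apiserver process:", err)
			return
		}
		pid := strings.TrimSpace(string(out))

		// 2. Freezer cgroup path from /proc/<pid>/cgroup.
		data, err := os.ReadFile("/proc/" + pid + "/cgroup")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		m := regexp.MustCompile(`(?m)^\d+:freezer:(.*)$`).FindStringSubmatch(string(data))
		if m == nil {
			fmt.Fprintln(os.Stderr, "no freezer controller entry")
			return
		}

		// 3. The container must report THAWED, i.e. not paused.
		state, _ := os.ReadFile("/sys/fs/cgroup/freezer" + m[1] + "/freezer.state")
		fmt.Println("freezer state:", strings.TrimSpace(string(state)))

		// 4. Only then is /healthz consulted; 200 with body "ok" matches the log.
		client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
		resp, err := client.Get("https://192.168.58.2:8443/healthz")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}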

                                                
                                    
TestMultiNode/serial/StartAfterStop (12.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-933566 node start m03 --alsologtostderr: (11.826436974s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.67s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (123.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-933566
multinode_test.go:318: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-933566
multinode_test.go:318: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-933566: (24.972409157s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-933566 --wait=true -v=8 --alsologtostderr
E0108 20:38:03.070294  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-arm64 start -p multinode-933566 --wait=true -v=8 --alsologtostderr: (1m38.36522012s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-933566
--- PASS: TestMultiNode/serial/RestartKeepsNodes (123.50s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p multinode-933566 node delete m03: (4.394589579s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.15s)
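The go-template passed to kubectl at multinode_test.go:460 prints the status of each node's Ready condition, one per line. Here is a small sketch evaluating the same template with Go's text/template against a hand-written node list; the JSON below is a stand-in, not real apiserver output.

	// readytemplate.go - evaluate the Ready-condition template from the test.
	package main

	import (
		"encoding/json"
		"os"
		"text/template"
	)

	// Two fake nodes; only the second condition of the first node is "Ready".
	const nodesJSON = `{
	  "items": [
	    {"status": {"conditions": [
	      {"type": "MemoryPressure", "status": "False"},
	      {"type": "Ready", "status": "True"}
	    ]}},
	    {"status": {"conditions": [
	      {"type": "Ready", "status": "True"}
	    ]}}
	  ]
	}`

	// The exact template string the test passes to kubectl.
	const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	func main() {
		// Decode into maps so lowercase keys like .items resolve, as in kubectl.
		var nodes map[string]interface{}
		if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
			panic(err)
		}
		t := template.Must(template.New("ready").Parse(readyTmpl))
		// Prints " True" once per node, which is what the test checks for.
		if err := t.Execute(os.Stdout, nodes); err != nil {
			panic(err)
		}
	}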

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 stop
E0108 20:39:11.026690  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-linux-arm64 -p multinode-933566 stop: (23.735046495s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-933566 status: exit status 7 (105.705554ms)

                                                
                                                
-- stdout --
	multinode-933566
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-933566-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-933566 status --alsologtostderr: exit status 7 (99.699666ms)

                                                
                                                
-- stdout --
	multinode-933566
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-933566-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 20:39:16.056822  720542 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:39:16.057003  720542 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:39:16.057029  720542 out.go:309] Setting ErrFile to fd 2...
	I0108 20:39:16.057050  720542 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:39:16.057325  720542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-633350/.minikube/bin
	I0108 20:39:16.057536  720542 out.go:303] Setting JSON to false
	I0108 20:39:16.057651  720542 mustload.go:65] Loading cluster: multinode-933566
	I0108 20:39:16.057693  720542 notify.go:220] Checking for updates...
	I0108 20:39:16.058109  720542 config.go:182] Loaded profile config "multinode-933566": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:39:16.058147  720542 status.go:255] checking status of multinode-933566 ...
	I0108 20:39:16.058683  720542 cli_runner.go:164] Run: docker container inspect multinode-933566 --format={{.State.Status}}
	I0108 20:39:16.077959  720542 status.go:330] multinode-933566 host status = "Stopped" (err=<nil>)
	I0108 20:39:16.077979  720542 status.go:343] host is not running, skipping remaining checks
	I0108 20:39:16.077985  720542 status.go:257] multinode-933566 status: &{Name:multinode-933566 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 20:39:16.078015  720542 status.go:255] checking status of multinode-933566-m02 ...
	I0108 20:39:16.078315  720542 cli_runner.go:164] Run: docker container inspect multinode-933566-m02 --format={{.State.Status}}
	I0108 20:39:16.096711  720542 status.go:330] multinode-933566-m02 host status = "Stopped" (err=<nil>)
	I0108 20:39:16.096730  720542 status.go:343] host is not running, skipping remaining checks
	I0108 20:39:16.096737  720542 status.go:257] multinode-933566-m02 status: &{Name:multinode-933566-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.94s)
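Note that `minikube status` deliberately exits non-zero for stopped clusters, so the test inspects the exit code (7 above) rather than treating the error as fatal. A sketch of that pattern, with the binary path and profile name taken from the log:

	// statusexit.go - read the exit code of `minikube status` instead of
	// treating a non-zero exit as failure; 7 is what stopped hosts return above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-933566", "status")
		out, err := cmd.Output()
		fmt.Print(string(out))
		if ee, ok := err.(*exec.ExitError); ok {
			// Exit code 7 shows up above when host/kubelet/apiserver are Stopped.
			fmt.Println("status exit code:", ee.ExitCode())
		}
	}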

                                                
                                    
TestMultiNode/serial/RestartMultiNode (85.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-933566 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:382: (dbg) Done: out/minikube-linux-arm64 start -p multinode-933566 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m25.041650356s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p multinode-933566 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (85.79s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (34.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-933566
multinode_test.go:480: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-933566-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-933566-m02 --driver=docker  --container-runtime=crio: exit status 14 (103.77931ms)

                                                
                                                
-- stdout --
	* [multinode-933566-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-633350/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-633350/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-933566-m02' is duplicated with machine name 'multinode-933566-m02' in profile 'multinode-933566'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-933566-m03 --driver=docker  --container-runtime=crio
multinode_test.go:488: (dbg) Done: out/minikube-linux-arm64 start -p multinode-933566-m03 --driver=docker  --container-runtime=crio: (32.108575207s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-933566
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-933566: exit status 80 (351.198532ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-933566
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-933566-m03 already exists in multinode-933566-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-933566-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-933566-m03: (1.980795715s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.61s)
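The conflict comes from minikube's machine-naming scheme: workers in a profile are named <profile>-m02, <profile>-m03, and so on, so a new profile may not reuse any existing machine name. A toy version of that uniqueness rule; the hard-coded profile data is illustrative, minikube reads it from its profile store.

	// namecheck.go - sketch of the profile/machine name uniqueness rule.
	package main

	import "fmt"

	// machineNames expands a profile into its machine names: the profile
	// itself plus "-m02", "-m03", ... for each additional node.
	func machineNames(profile string, nodes int) []string {
		names := []string{profile}
		for i := 2; i <= nodes; i++ {
			names = append(names, fmt.Sprintf("%s-m%02d", profile, i))
		}
		return names
	}

	func valid(newProfile string, existing map[string]int) error {
		for profile, nodes := range existing {
			for _, name := range machineNames(profile, nodes) {
				if name == newProfile {
					return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q",
						newProfile, name, profile)
				}
			}
		}
		return nil
	}

	func main() {
		// multinode-933566 has two nodes left after DeleteNode removed m03.
		existing := map[string]int{"multinode-933566": 2}
		fmt.Println(valid("multinode-933566-m02", existing)) // rejected, as above
		fmt.Println(valid("multinode-933566-m03", existing)) // allowed: m03 no longer exists
	}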

                                                
                                    
TestPreload (171.25s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-708006 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0108 20:41:26.377671  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-708006 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m24.32450069s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-708006 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-708006 image pull gcr.io/k8s-minikube/busybox: (1.964057949s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-708006
E0108 20:42:49.420820  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-708006: (5.790216854s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-708006 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0108 20:43:03.070900  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
E0108 20:44:11.027241  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-708006 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m16.54366622s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-708006 image list
helpers_test.go:175: Cleaning up "test-preload-708006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-708006
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-708006: (2.359031067s)
--- PASS: TestPreload (171.25s)
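TestPreload's flow can be driven by hand: start with --preload=false, pull an image into the node, stop, restart, and check the image survived. A sketch with os/exec, reusing the binary and profile name from the log:

	// preloadflow.go - drive the preload test sequence shown above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func run(args ...string) string {
		out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
		if err != nil {
			fmt.Println("error:", err)
		}
		return string(out)
	}

	func main() {
		p := "test-preload-708006"
		run("start", "-p", p, "--memory=2200", "--preload=false",
			"--driver=docker", "--container-runtime=crio", "--kubernetes-version=v1.24.4")
		run("-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox")
		run("stop", "-p", p)
		run("start", "-p", p, "--memory=2200", "--wait=true",
			"--driver=docker", "--container-runtime=crio")
		// The pulled image must still be present after the restart.
		if strings.Contains(run("-p", p, "image", "list"), "busybox") {
			fmt.Println("image survived the restart")
		}
		run("delete", "-p", p)
	}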

                                                
                                    
TestScheduledStopUnix (113.86s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-307122 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-307122 --memory=2048 --driver=docker  --container-runtime=crio: (37.072511043s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-307122 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-307122 -n scheduled-stop-307122
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-307122 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-307122 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-307122 -n scheduled-stop-307122
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-307122
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-307122 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0108 20:45:34.127865  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-307122
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-307122: exit status 7 (89.534714ms)

                                                
                                                
-- stdout --
	scheduled-stop-307122
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-307122 -n scheduled-stop-307122
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-307122 -n scheduled-stop-307122: exit status 7 (87.48249ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-307122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-307122
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-307122: (5.014144546s)
--- PASS: TestScheduledStopUnix (113.86s)
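The same schedule/cancel/reschedule dance can be scripted: schedule a stop, cancel it, schedule a short one, then poll status until exit code 7 reports the host stopped. A sketch under those assumptions:

	// schedstop.go - drive the scheduled-stop flow from the test above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func mk(args ...string) error {
		return exec.Command("out/minikube-linux-arm64", args...).Run()
	}

	func main() {
		p := "scheduled-stop-307122"
		mk("stop", "-p", p, "--schedule", "5m")
		mk("stop", "-p", p, "--cancel-scheduled") // cluster keeps running
		mk("stop", "-p", p, "--schedule", "15s")
		for i := 0; i < 20; i++ {
			err := exec.Command("out/minikube-linux-arm64",
				"status", "--format={{.Host}}", "-p", p).Run()
			// Exit code 7 is what the log shows once the host is Stopped.
			if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
				fmt.Println("host stopped, as scheduled")
				return
			}
			time.Sleep(5 * time.Second)
		}
	}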

                                                
                                    
TestInsufficientStorage (11.08s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-617925 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-617925 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.470813075s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3a768cec-8d20-4235-b23c-effc34754219","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-617925] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3d67aa61-dcf7-4a51-90f7-ac44aa8fb751","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17907"}}
	{"specversion":"1.0","id":"9ea83dde-c15f-495a-b9e6-beee40622edd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c5de3359-e7ca-479a-86d9-3a3969771d6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17907-633350/kubeconfig"}}
	{"specversion":"1.0","id":"6714bf84-7160-4df5-860a-4d15d060e06e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-633350/.minikube"}}
	{"specversion":"1.0","id":"41759f87-680e-44d7-b7f7-c5722966b088","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d4cb93a5-a8cf-4a0e-954c-c78839b5b95c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"80804557-025f-4619-8bf8-0a37af461d28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"030509ad-0655-4218-8a4d-4a38405b2d2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"edcb30ca-704d-4aa4-9f03-bfdfb7afd139","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d01aff3e-030e-4642-b2db-67b9c78d4ae8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"68ac04f2-8a0f-4db6-876e-62c8588ec2b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-617925 in cluster insufficient-storage-617925","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0c13c649-75dd-4327-ac94-5fe6cbb2cf1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1703498848-17857 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"73c62801-235a-457a-a03a-5c4c331dd0f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"9d1e078a-6d54-43f0-8aa2-fd16d009a4ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-617925 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-617925 --output=json --layout=cluster: exit status 7 (337.74006ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-617925","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-617925","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 20:46:16.845386  737190 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-617925" does not appear in /home/jenkins/minikube-integration/17907-633350/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-617925 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-617925 --output=json --layout=cluster: exit status 7 (317.557374ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-617925","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-617925","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 20:46:17.165233  737242 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-617925" does not appear in /home/jenkins/minikube-integration/17907-633350/kubeconfig
	E0108 20:46:17.177308  737242 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/insufficient-storage-617925/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-617925" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-617925
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-617925: (1.955097928s)
--- PASS: TestInsufficientStorage (11.08s)
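With --output=json, each progress line above is a CloudEvents envelope. A small decoder that reads one event per line from stdin and surfaces the error event's exit code and message; the struct only covers fields visible in this log.

	// events.go - decode minikube's --output=json CloudEvents lines.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // log lines can be long
		for sc.Scan() {
			var e event
			if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
				continue // skip non-JSON lines
			}
			if e.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("exit %s (%s): %s\n",
					e.Data["exitcode"], e.Data["name"], e.Data["message"])
			}
		}
	}

Piping the failed start's stdout through it would print the RSRC_DOCKER_STORAGE error with exit code 26, matching the non-zero exit above.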

                                                
                                    
TestKubernetesUpgrade (396.95s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-987659 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0108 20:48:03.073507  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-987659 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m4.505436862s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-987659
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-987659: (1.456369183s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-987659 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-987659 status --format={{.Host}}: exit status 7 (120.874081ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-987659 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0108 20:49:11.027225  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-987659 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m43.802627714s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-987659 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-987659 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-987659 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (97.776305ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-987659] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-633350/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-633350/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-987659
	    minikube start -p kubernetes-upgrade-987659 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9876592 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-987659 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-987659 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-987659 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.382154561s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-987659" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-987659
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-987659: (2.478617299s)
--- PASS: TestKubernetesUpgrade (396.95s)
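The exit-106 refusal is a version-ordering guard: a requested Kubernetes version older than the cluster's current one is rejected with K8S_DOWNGRADE_UNSUPPORTED. A sketch of just the comparison, using golang.org/x/mod/semver; minikube's actual check is more involved, this only illustrates the idea.

	// downgrade.go - refuse a requested version older than the current one.
	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	func main() {
		current, requested := "v1.29.0-rc.2", "v1.16.0"
		// Compare handles prerelease suffixes like -rc.2 correctly.
		if semver.Compare(requested, current) < 0 {
			fmt.Printf("Unable to safely downgrade existing Kubernetes %s cluster to %s\n",
				current, requested)
		}
	}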

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-804725 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-804725 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (104.106034ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-804725] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-633350/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-633350/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
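That MK_USAGE failure is plain flag validation: --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal stand-in with the standard flag package, with the exit code copied from the log:

	// flagguard.go - sketch of a mutual-exclusion check between two flags.
	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to use")
		flag.Parse()
		if *noK8s && *k8sVersion != "" {
			fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14) // exit code seen in the log above
		}
	}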

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (44.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-804725 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-804725 --driver=docker  --container-runtime=crio: (43.984078303s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-804725 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.52s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (12.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-804725 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-804725 --no-kubernetes --driver=docker  --container-runtime=crio: (9.233391555s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-804725 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-804725 status -o json: exit status 2 (409.779796ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-804725","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-804725
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-804725: (2.475497025s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (12.12s)

                                                
                                    
TestNoKubernetes/serial/Start (8.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-804725 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-804725 --no-kubernetes --driver=docker  --container-runtime=crio: (8.784702159s)
--- PASS: TestNoKubernetes/serial/Start (8.78s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-804725 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-804725 "sudo systemctl is-active --quiet service kubelet": exit status 1 (427.602195ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.43s)
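The check relies on systemctl's exit codes: `is-active` exits 0 when the unit is active and non-zero (3, as in the ssh status above) when it is inactive, so the failing command is the passing case here. A local sketch; the real test runs the command over ssh inside the node.

	// kubeletoff.go - treat a non-zero `systemctl is-active` exit as success.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("systemctl", "is-active", "--quiet", "service", "kubelet").Run()
		if err == nil {
			fmt.Println("kubelet is running - not expected here")
			return
		}
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Println("kubelet not active, exit code:", ee.ExitCode()) // 3 in the log
		}
	}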

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.08s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-804725
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-804725: (1.254713252s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-804725 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-804725 --driver=docker  --container-runtime=crio: (8.122768889s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.12s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-804725 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-804725 "sudo systemctl is-active --quiet service kubelet": exit status 1 (398.076299ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.40s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.76s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.76s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-272282
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

                                                
                                    
TestPause/serial/Start (81.39s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-646566 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0108 20:53:03.070866  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-646566 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m21.392893204s)
--- PASS: TestPause/serial/Start (81.39s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (43.88s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-646566 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0108 20:54:11.026766  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-646566 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.846481712s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (43.88s)

                                                
                                    
TestPause/serial/Pause (1.03s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-646566 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-646566 --alsologtostderr -v=5: (1.028676379s)
--- PASS: TestPause/serial/Pause (1.03s)

                                                
                                    
TestPause/serial/VerifyStatus (0.55s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-646566 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-646566 --output=json --layout=cluster: exit status 2 (547.45136ms)

                                                
                                                
-- stdout --
	{"Name":"pause-646566","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 8 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-646566","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.55s)
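The --layout=cluster JSON reuses HTTP-flavored status codes: 200 OK, 405 Stopped, 418 Paused, 500 Error, 507 InsufficientStorage. A decoder sketch over the payload shown above, with struct fields limited to what the log contains:

	// clusterstatus.go - decode the --layout=cluster status JSON.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type comp struct {
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	type cluster struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
		Nodes      []struct {
			Name       string          `json:"Name"`
			Components map[string]comp `json:"Components"`
		} `json:"Nodes"`
	}

	// Trimmed version of the stdout above.
	const sample = `{"Name":"pause-646566","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-646566","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

	func main() {
		var c cluster
		if err := json.Unmarshal([]byte(sample), &c); err != nil {
			panic(err)
		}
		fmt.Printf("%s: %d %s\n", c.Name, c.StatusCode, c.StatusName)
		for _, n := range c.Nodes {
			for name, st := range n.Components {
				fmt.Printf("  %s/%s: %d %s\n", n.Name, name, st.StatusCode, st.StatusName)
			}
		}
	}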

                                                
                                    
TestPause/serial/Unpause (1.24s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-646566 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-646566 --alsologtostderr -v=5: (1.244665388s)
--- PASS: TestPause/serial/Unpause (1.24s)

                                                
                                    
TestPause/serial/PauseAgain (1.72s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-646566 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-646566 --alsologtostderr -v=5: (1.715191673s)
--- PASS: TestPause/serial/PauseAgain (1.72s)

                                                
                                    
TestPause/serial/DeletePaused (3.56s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-646566 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-646566 --alsologtostderr -v=5: (3.561439782s)
--- PASS: TestPause/serial/DeletePaused (3.56s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.83s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-646566
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-646566: exit status 1 (26.207266ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-646566: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.83s)

                                                
                                    
TestNetworkPlugins/group/false (5.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-595882 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-595882 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (259.111028ms)

                                                
                                                
-- stdout --
	* [false-595882] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-633350/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-633350/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 20:54:53.980705  776319 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:54:53.980877  776319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:54:53.980889  776319 out.go:309] Setting ErrFile to fd 2...
	I0108 20:54:53.980896  776319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:54:53.981152  776319 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-633350/.minikube/bin
	I0108 20:54:53.981577  776319 out.go:303] Setting JSON to false
	I0108 20:54:53.982517  776319 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13036,"bootTime":1704734258,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0108 20:54:53.982590  776319 start.go:138] virtualization:  
	I0108 20:54:53.985549  776319 out.go:177] * [false-595882] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 20:54:53.988188  776319 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:54:53.990206  776319 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:54:53.988388  776319 notify.go:220] Checking for updates...
	I0108 20:54:53.994000  776319 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-633350/kubeconfig
	I0108 20:54:53.996723  776319 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-633350/.minikube
	I0108 20:54:53.998685  776319 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0108 20:54:54.000996  776319 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:54:54.004080  776319 config.go:182] Loaded profile config "force-systemd-flag-980184": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:54:54.004230  776319 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:54:54.030189  776319 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:54:54.030312  776319 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:54:54.119966  776319 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-08 20:54:54.109087209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 20:54:54.120079  776319 docker.go:295] overlay module found
	I0108 20:54:54.123827  776319 out.go:177] * Using the docker driver based on user configuration
	I0108 20:54:54.125826  776319 start.go:298] selected driver: docker
	I0108 20:54:54.125842  776319 start.go:902] validating driver "docker" against <nil>
	I0108 20:54:54.125856  776319 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:54:54.128262  776319 out.go:177] 
	W0108 20:54:54.130238  776319 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0108 20:54:54.132067  776319 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-595882 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-595882

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-595882

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-595882

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-595882

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-595882

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-595882

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-595882

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-595882

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-595882

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-595882

>>> host: /etc/nsswitch.conf:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: /etc/hosts:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: /etc/resolv.conf:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-595882

>>> host: crictl pods:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: crictl containers:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> k8s: describe netcat deployment:
error: context "false-595882" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-595882" does not exist

>>> k8s: netcat logs:
error: context "false-595882" does not exist

>>> k8s: describe coredns deployment:
error: context "false-595882" does not exist

>>> k8s: describe coredns pods:
error: context "false-595882" does not exist

>>> k8s: coredns logs:
error: context "false-595882" does not exist

>>> k8s: describe api server pod(s):
error: context "false-595882" does not exist

>>> k8s: api server logs:
error: context "false-595882" does not exist

>>> host: /etc/cni:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: ip a s:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: ip r s:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: iptables-save:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: iptables table nat:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> k8s: describe kube-proxy daemon set:
error: context "false-595882" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-595882" does not exist

>>> k8s: kube-proxy logs:
error: context "false-595882" does not exist

>>> host: kubelet daemon status:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: kubelet daemon config:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> k8s: kubelet logs:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

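Every kubectl probe above fails for the same reason this kubeconfig dump makes visible: clusters, contexts, and users are all null, so no context named false-595882 can be resolved. As a sketch, two standard kubectl invocations that confirm this state before debugging further:

  # prints only the column header when the config is empty
  kubectl config get-contexts
  # reproduces the failure mode seen throughout this debug log
  kubectl --context false-595882 get nodes
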
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-595882

>>> host: docker daemon status:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: docker daemon config:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: /etc/docker/daemon.json:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: docker system info:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: cri-docker daemon status:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: cri-docker daemon config:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: cri-dockerd version:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: containerd daemon status:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: containerd daemon config:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: /etc/containerd/config.toml:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: containerd config dump:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: crio daemon status:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: crio daemon config:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: /etc/crio:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

>>> host: crio config:
* Profile "false-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-595882"

----------------------- debugLogs end: false-595882 [took: 4.807231501s] --------------------------------
helpers_test.go:175: Cleaning up "false-595882" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-595882
--- PASS: TestNetworkPlugins/group/false (5.26s)

TestStartStop/group/old-k8s-version/serial/FirstStart (134.79s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-482542 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0108 20:56:26.377951  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
E0108 20:58:03.070896  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-482542 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m14.793653192s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (134.79s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.5s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-482542 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ad438c2a-33f6-4e51-bc57-360f1e2a715f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ad438c2a-33f6-4e51-bc57-360f1e2a715f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003410602s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-482542 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.50s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-482542 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-482542 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/old-k8s-version/serial/Stop (11.99s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-482542 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-482542 --alsologtostderr -v=3: (11.992894948s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.99s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-482542 -n old-k8s-version-482542
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-482542 -n old-k8s-version-482542: exit status 7 (89.242044ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-482542 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
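The "may be ok" note above is deliberate: minikube status encodes profile state in its exit code, so a cleanly stopped host returns non-zero (7 here) even though nothing is wrong, and addon changes are still recorded against the stored profile config and applied on the next start, which the later SecondStart/AddonExistsAfterStop sub-tests verify. A sketch of the same sequence in shell, using the profile and flags shown in this test:

  # non-zero exit plus "Stopped" output is the expected state after a stop
  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-482542 || true
  # enabling an addon while stopped only updates the profile configuration
  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-482542 \
    --images=MetricsScraper=registry.k8s.io/echoserver:1.4
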

TestStartStop/group/old-k8s-version/serial/SecondStart (449.39s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-482542 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-482542 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m28.891114075s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-482542 -n old-k8s-version-482542
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (449.39s)

TestStartStop/group/no-preload/serial/FirstStart (65.54s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-784505 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0108 20:59:11.032836  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
E0108 20:59:29.421040  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-784505 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m5.538930124s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (65.54s)

TestStartStop/group/no-preload/serial/DeployApp (9.33s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-784505 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4f853ce4-f81b-4fef-9e5c-354900f1b8be] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4f853ce4-f81b-4fef-9e5c-354900f1b8be] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00368777s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-784505 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.33s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-784505 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-784505 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.018629018s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-784505 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/no-preload/serial/Stop (11.99s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-784505 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-784505 --alsologtostderr -v=3: (11.988117371s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.99s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-784505 -n no-preload-784505
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-784505 -n no-preload-784505: exit status 7 (88.172675ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-784505 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (620.22s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-784505 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0108 21:01:26.378534  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
E0108 21:02:14.128072  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
E0108 21:03:03.070094  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
E0108 21:04:11.026899  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-784505 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (10m19.821278305s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-784505 -n no-preload-784505
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (620.22s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-bbd5x" [50dfe7f8-a46d-4ea6-81b6-f18a1d9d1dd5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003268658s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.14s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-bbd5x" [50dfe7f8-a46d-4ea6-81b6-f18a1d9d1dd5] Running
E0108 21:06:26.378533  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003973618s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-482542 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.14s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-482542 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (3.49s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-482542 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-482542 -n old-k8s-version-482542
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-482542 -n old-k8s-version-482542: exit status 2 (352.586382ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-482542 -n old-k8s-version-482542
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-482542 -n old-k8s-version-482542: exit status 2 (367.072073ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-482542 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-482542 -n old-k8s-version-482542
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-482542 -n old-k8s-version-482542
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.49s)
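The pause check above relies on two different status fields: after pause, the API server reports Paused while the kubelet reports Stopped (both with exit status 2, which the test tolerates), and unpause restores both. A compact sketch of the verify loop this sub-test performs, using only commands that appear in the log:

  out/minikube-linux-arm64 pause -p old-k8s-version-482542
  # exit status 2 with "Paused" / "Stopped" is the expected paused state
  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-482542 || true
  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-482542 || true
  out/minikube-linux-arm64 unpause -p old-k8s-version-482542
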

TestStartStop/group/embed-certs/serial/FirstStart (78.63s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-704697 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0108 21:07:46.121341  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-704697 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m18.630334193s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (78.63s)

TestStartStop/group/embed-certs/serial/DeployApp (10.37s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-704697 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [10133446-74c9-4247-8692-da83b25dc86e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [10133446-74c9-4247-8692-da83b25dc86e] Running
E0108 21:08:03.070240  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004491236s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-704697 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.37s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.24s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-704697 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-704697 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.119078582s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-704697 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.24s)

TestStartStop/group/embed-certs/serial/Stop (12.05s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-704697 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-704697 --alsologtostderr -v=3: (12.048151082s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.05s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-704697 -n embed-certs-704697
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-704697 -n embed-certs-704697: exit status 7 (90.440855ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-704697 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/SecondStart (598.59s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-704697 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0108 21:08:27.243914  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/old-k8s-version-482542/client.crt: no such file or directory
E0108 21:08:27.249130  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/old-k8s-version-482542/client.crt: no such file or directory
E0108 21:08:27.260009  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/old-k8s-version-482542/client.crt: no such file or directory
E0108 21:08:27.280234  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/old-k8s-version-482542/client.crt: no such file or directory
E0108 21:08:27.321098  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/old-k8s-version-482542/client.crt: no such file or directory
E0108 21:08:27.401362  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/old-k8s-version-482542/client.crt: no such file or directory
E0108 21:08:27.561505  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/old-k8s-version-482542/client.crt: no such file or directory
E0108 21:08:27.881650  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/old-k8s-version-482542/client.crt: no such file or directory
E0108 21:08:28.521902  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/old-k8s-version-482542/client.crt: no such file or directory
E0108 21:08:29.802279  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/old-k8s-version-482542/client.crt: no such file or directory
E0108 21:08:32.362552  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/old-k8s-version-482542/client.crt: no such file or directory
E0108 21:08:37.482860  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/old-k8s-version-482542/client.crt: no such file or directory
E0108 21:08:47.723064  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/old-k8s-version-482542/client.crt: no such file or directory
E0108 21:09:08.203297  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/old-k8s-version-482542/client.crt: no such file or directory
E0108 21:09:11.026631  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
E0108 21:09:49.163658  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/old-k8s-version-482542/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-704697 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (9m58.10490587s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-704697 -n embed-certs-704697
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (598.59s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tbfd9" [fe21bd3f-f589-44f5-b4d4-2ad55698cf3a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003737602s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tbfd9" [fe21bd3f-f589-44f5-b4d4-2ad55698cf3a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003635939s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-784505 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-784505 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.43s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-784505 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-784505 -n no-preload-784505
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-784505 -n no-preload-784505: exit status 2 (371.431347ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-784505 -n no-preload-784505
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-784505 -n no-preload-784505: exit status 2 (371.562704ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-784505 --alsologtostderr -v=1
E0108 21:11:11.084805  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/old-k8s-version-482542/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-784505 -n no-preload-784505
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-784505 -n no-preload-784505
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.43s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.19s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-604073 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0108 21:11:26.378736  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-604073 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m19.185614327s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.19s)
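The only difference from the default profiles here is --apiserver-port=8444 (minikube's default is 8443); the kubeconfig context minikube writes carries the custom port, so kubectl needs no extra flags. As a sketch, a standard way to confirm which endpoint the generated context points at, using the profile name from this test:

  # shows the control-plane URL, including the :8444 port
  kubectl --context default-k8s-diff-port-604073 cluster-info
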

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.37s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-604073 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3070568b-9d46-4a41-bc05-46a77cbd0cd1] Pending
helpers_test.go:344: "busybox" [3070568b-9d46-4a41-bc05-46a77cbd0cd1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3070568b-9d46-4a41-bc05-46a77cbd0cd1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004027769s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-604073 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.37s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-604073 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-604073 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.055847098s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-604073 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-604073 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-604073 --alsologtostderr -v=3: (12.104444633s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.10s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-604073 -n default-k8s-diff-port-604073
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-604073 -n default-k8s-diff-port-604073: exit status 7 (94.460431ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-604073 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (614.62s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-604073 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0108 21:13:03.070626  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
E0108 21:13:27.243151  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/old-k8s-version-482542/client.crt: no such file or directory
E0108 21:13:54.925502  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/old-k8s-version-482542/client.crt: no such file or directory
E0108 21:14:11.026950  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
E0108 21:15:15.346209  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/no-preload-784505/client.crt: no such file or directory
E0108 21:15:15.351704  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/no-preload-784505/client.crt: no such file or directory
E0108 21:15:15.361943  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/no-preload-784505/client.crt: no such file or directory
E0108 21:15:15.382169  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/no-preload-784505/client.crt: no such file or directory
E0108 21:15:15.422490  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/no-preload-784505/client.crt: no such file or directory
E0108 21:15:15.502827  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/no-preload-784505/client.crt: no such file or directory
E0108 21:15:15.663710  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/no-preload-784505/client.crt: no such file or directory
E0108 21:15:15.984133  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/no-preload-784505/client.crt: no such file or directory
E0108 21:15:16.625056  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/no-preload-784505/client.crt: no such file or directory
E0108 21:15:17.906134  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/no-preload-784505/client.crt: no such file or directory
E0108 21:15:20.466677  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/no-preload-784505/client.crt: no such file or directory
E0108 21:15:25.587449  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/no-preload-784505/client.crt: no such file or directory
E0108 21:15:35.827608  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/no-preload-784505/client.crt: no such file or directory
E0108 21:15:56.307816  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/no-preload-784505/client.crt: no such file or directory
E0108 21:16:09.421580  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
E0108 21:16:26.378495  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
E0108 21:16:37.268902  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/no-preload-784505/client.crt: no such file or directory
E0108 21:17:59.189538  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/no-preload-784505/client.crt: no such file or directory
E0108 21:18:03.070922  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-604073 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (10m14.114192741s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-604073 -n default-k8s-diff-port-604073
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (614.62s)
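Taken together, this serial group stops the cluster, enables an addon while it is down, and then performs the second start shown above with identical flags, waiting until the apiserver is healthy again. A condensed sketch of that cycle, with every command and flag copied from the log lines above:

	out/minikube-linux-arm64 stop -p default-k8s-diff-port-604073
	out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-604073
	out/minikube-linux-arm64 start -p default-k8s-diff-port-604073 --memory=2200 --wait=true \
		--apiserver-port=8444 --driver=docker --container-runtime=crio --kubernetes-version=v1.28.4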

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qsccv" [9eb2d1dc-14db-4a00-bdb9-97f4352d9e00] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003989615s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qsccv" [9eb2d1dc-14db-4a00-bdb9-97f4352d9e00] Running
E0108 21:18:27.243636  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/old-k8s-version-482542/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004275074s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-704697 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-704697 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)
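The VerifyKubernetesImages step dumps the loaded images as JSON and reports anything outside minikube's expected set, such as the two images listed above. A rough command-line approximation (the jq pipeline and the repoTags field name are assumptions; the test itself parses the JSON in Go):

	out/minikube-linux-arm64 -p embed-certs-704697 image list --format=json | jq -r '.[].repoTags[]'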

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (4.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-704697 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-704697 --alsologtostderr -v=1: (1.290095919s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-704697 -n embed-certs-704697
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-704697 -n embed-certs-704697: exit status 2 (465.260905ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-704697 -n embed-certs-704697
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-704697 -n embed-certs-704697: exit status 2 (411.373436ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-704697 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-704697 -n embed-certs-704697
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-704697 -n embed-certs-704697
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.11s)
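The Pause step relies on the same status templates: after pause the apiserver should report Paused and the kubelet Stopped (hence the expected exit status 2), and both should recover after unpause. The sequence boiled down to its commands, all taken from the log:

	out/minikube-linux-arm64 pause -p embed-certs-704697
	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-704697   # prints Paused, exits 2
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-704697     # prints Stopped, exits 2
	out/minikube-linux-arm64 unpause -p embed-certs-704697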

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (49.4s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-292672 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0108 21:18:54.128838  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
E0108 21:19:11.026566  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-292672 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (49.399410115s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.40s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-292672 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-292672 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.099066308s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.27s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-292672 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-292672 --alsologtostderr -v=3: (1.265668407s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-292672 -n newest-cni-292672
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-292672 -n newest-cni-292672: exit status 7 (93.994662ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-292672 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (30.82s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-292672 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-292672 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (30.32501438s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-292672 -n newest-cni-292672
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (30.82s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-292672 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.21s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-292672 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-292672 -n newest-cni-292672
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-292672 -n newest-cni-292672: exit status 2 (363.379685ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-292672 -n newest-cni-292672
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-292672 -n newest-cni-292672: exit status 2 (370.984118ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-292672 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-292672 -n newest-cni-292672
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-292672 -n newest-cni-292672
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (77.01s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-595882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0108 21:20:15.345900  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/no-preload-784505/client.crt: no such file or directory
E0108 21:20:43.029999  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/no-preload-784505/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-595882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m17.006540633s)
--- PASS: TestNetworkPlugins/group/auto/Start (77.01s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-595882 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-595882 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wwmb5" [fea5a560-dc07-4dea-be8f-5e7aef86ee74] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 21:21:26.378131  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-wwmb5" [fea5a560-dc07-4dea-be8f-5e7aef86ee74] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003365328s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.28s)
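Each NetCatPod step installs the suite's netcat deployment and waits for it to become Ready before the DNS, Localhost, and HairPin probes run against it. An equivalent standalone wait, assuming the same testdata manifest is on hand (kubectl wait stands in for the suite's Go polling helper):

	kubectl --context auto-595882 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-595882 wait --for=condition=ready pod -l app=netcat --timeout=15m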

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-595882 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-595882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-595882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
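The last three probes all run inside the netcat pod: DNS resolves kubernetes.default, Localhost dials the pod's own port directly, and HairPin dials back in through the netcat service, which only succeeds when the CNI supports hairpin traffic. Side by side, with the commands copied from the log:

	kubectl --context auto-595882 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-595882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-595882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

The same trio repeats below for every network plugin under test.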

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (78.23s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-595882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0108 21:23:03.070083  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-595882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m18.229823115s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (78.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rvq6s" [18fdcf28-7385-412c-b3c4-1b03ec58bb74] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003608692s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-b6snt" [1e0af90c-c730-42b8-b38d-c93f50e5c6f8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003790267s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
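For CNIs that ship their own controller (kindnet, calico, flannel), the ControllerPod step first waits for the plugin's pod to be healthy before any connectivity probes run. A standalone equivalent using the label and namespace from the log above:

	kubectl --context kindnet-595882 wait --for=condition=ready pod -l app=kindnet -n kube-system --timeout=10m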

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rvq6s" [18fdcf28-7385-412c-b3c4-1b03ec58bb74] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004392366s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-604073 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-595882 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-595882 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8knvt" [2da3e9d9-c4c7-4696-808b-092effbf54ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8knvt" [2da3e9d9-c4c7-4696-808b-092effbf54ca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004480614s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-604073 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-604073 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-604073 --alsologtostderr -v=1: (1.297188934s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-604073 -n default-k8s-diff-port-604073
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-604073 -n default-k8s-diff-port-604073: exit status 2 (545.032384ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-604073 -n default-k8s-diff-port-604073
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-604073 -n default-k8s-diff-port-604073: exit status 2 (528.856876ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-604073 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-604073 --alsologtostderr -v=1: (1.042239371s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-604073 -n default-k8s-diff-port-604073
E0108 21:23:27.243236  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/old-k8s-version-482542/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-604073 -n default-k8s-diff-port-604073
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.49s)
E0108 21:27:56.127529  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/default-k8s-diff-port-604073/client.crt: no such file or directory
E0108 21:28:03.070292  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
E0108 21:28:15.479235  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/kindnet-595882/client.crt: no such file or directory
E0108 21:28:15.484600  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/kindnet-595882/client.crt: no such file or directory
E0108 21:28:15.494826  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/kindnet-595882/client.crt: no such file or directory
E0108 21:28:15.515648  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/kindnet-595882/client.crt: no such file or directory
E0108 21:28:15.555946  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/kindnet-595882/client.crt: no such file or directory
E0108 21:28:15.636314  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/kindnet-595882/client.crt: no such file or directory
E0108 21:28:15.796833  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/kindnet-595882/client.crt: no such file or directory
E0108 21:28:16.117703  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/kindnet-595882/client.crt: no such file or directory
E0108 21:28:16.608310  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/default-k8s-diff-port-604073/client.crt: no such file or directory
E0108 21:28:16.758581  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/kindnet-595882/client.crt: no such file or directory
E0108 21:28:18.039534  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/kindnet-595882/client.crt: no such file or directory
E0108 21:28:20.600153  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/kindnet-595882/client.crt: no such file or directory
E0108 21:28:25.720838  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/kindnet-595882/client.crt: no such file or directory
E0108 21:28:27.243502  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/old-k8s-version-482542/client.crt: no such file or directory
E0108 21:28:35.961937  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/kindnet-595882/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/calico/Start (90.09s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-595882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-595882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m30.094701048s)
--- PASS: TestNetworkPlugins/group/calico/Start (90.09s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-595882 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-595882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-595882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (78.48s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-595882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0108 21:24:11.027422  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/ingress-addon-legacy-105176/client.crt: no such file or directory
E0108 21:24:26.122330  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/addons-888287/client.crt: no such file or directory
E0108 21:24:50.285739  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/old-k8s-version-482542/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-595882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m18.484306036s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (78.48s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-2p57k" [bde3ced7-4ef2-4b6a-829a-6fd71746804b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.009281292s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-595882 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-595882 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xd6gk" [3bd0453a-d64c-4cea-9f4b-6abbaef5d1cc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xd6gk" [3bd0453a-d64c-4cea-9f4b-6abbaef5d1cc] Running
E0108 21:25:15.346064  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/no-preload-784505/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004269541s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-595882 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-595882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-595882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-595882 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-595882 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7n8c5" [4dd82581-fb5b-4e85-b4b5-13a23f5853f2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7n8c5" [4dd82581-fb5b-4e85-b4b5-13a23f5853f2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003465633s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-595882 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-595882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-595882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (65.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-595882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-595882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m5.261031599s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (65.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (67.42s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-595882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0108 21:26:23.765210  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/auto-595882/client.crt: no such file or directory
E0108 21:26:23.770443  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/auto-595882/client.crt: no such file or directory
E0108 21:26:23.780645  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/auto-595882/client.crt: no such file or directory
E0108 21:26:23.800852  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/auto-595882/client.crt: no such file or directory
E0108 21:26:23.841094  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/auto-595882/client.crt: no such file or directory
E0108 21:26:23.921654  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/auto-595882/client.crt: no such file or directory
E0108 21:26:24.081975  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/auto-595882/client.crt: no such file or directory
E0108 21:26:24.402188  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/auto-595882/client.crt: no such file or directory
E0108 21:26:25.042962  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/auto-595882/client.crt: no such file or directory
E0108 21:26:26.323941  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/auto-595882/client.crt: no such file or directory
E0108 21:26:26.378259  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/functional-735851/client.crt: no such file or directory
E0108 21:26:28.884126  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/auto-595882/client.crt: no such file or directory
E0108 21:26:34.004269  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/auto-595882/client.crt: no such file or directory
E0108 21:26:44.245012  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/auto-595882/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-595882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m7.420122255s)
--- PASS: TestNetworkPlugins/group/flannel/Start (67.42s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-595882 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-595882 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7s6dr" [d2647370-c4c5-4ac3-8d9d-06dc558983e3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7s6dr" [d2647370-c4c5-4ac3-8d9d-06dc558983e3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.00425684s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-595882 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-595882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-595882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-qjtgm" [521f904b-3599-4dab-8e48-d6386560f7f3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004728048s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-595882 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.38s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-595882 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6hnrn" [4e395071-3c94-4691-bee9-b419f8b5304e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6hnrn" [4e395071-3c94-4691-bee9-b419f8b5304e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.008038665s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.38s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (88.74s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-595882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-595882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m28.738464369s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.74s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-595882 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)
TestNetworkPlugins/group/flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-595882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)
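In the probe above, nc's -z performs a zero-I/O connect scan (connect, send nothing, close), -w 5 caps the wait at five seconds, and -i 5 adds a delay between sent lines or scanned ports. The same reachability check in Go is a bounded dial (sketch):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Equivalent of `nc -w 5 -z localhost 8080`: connect, send nothing, close.
    	conn, err := net.DialTimeout("tcp", "localhost:8080", 5*time.Second)
    	if err != nil {
    		fmt.Println("unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("reachable")
    }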
TestNetworkPlugins/group/flannel/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-595882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.25s)
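HairPin is the same connect scan pointed at the pod's own Service name: the connection leaves the pod toward the service VIP and must be NATed back into the originating pod, which only succeeds when the CNI enables hairpin mode. Inside the pod it reduces to a dial against the service's DNS name (sketch; the service is named netcat, matching the deployment above):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Dial the pod's own Service; this succeeds only if hairpin NAT loops
    	// the connection back to this pod. "netcat" resolves via cluster DNS.
    	_, err := net.DialTimeout("tcp", "netcat:8080", 5*time.Second)
    	fmt.Println("hairpin ok:", err == nil)
    }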
TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-595882 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

TestNetworkPlugins/group/bridge/NetCatPod (12.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-595882 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nwbng" [9a066aac-4076-4465-8f04-cc98621c9782] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 21:28:56.442659  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/kindnet-595882/client.crt: no such file or directory
E0108 21:28:57.569096  638732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-633350/.minikube/profiles/default-k8s-diff-port-604073/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-nwbng" [9a066aac-4076-4465-8f04-cc98621c9782] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.006446941s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.28s)

TestNetworkPlugins/group/bridge/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-595882 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-595882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-595882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

Test skip (32/316)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0.66s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-824222 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-824222" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-824222
--- SKIP: TestDownloadOnlyKic (0.66s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)
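Most of the skips that follow hinge on the same up-front guard: inspect GOOS/GOARCH and the runtime under test, then t.Skip before any cluster work starts. A sketch of that pattern (the helper name and the way the runtime is passed in are illustrative, not minikube's exact code):

    package mypkg

    import (
    	"runtime"
    	"testing"
    )

    // skipIfUnsupported mirrors the guard behind the skip above: only the
    // docker runtime is exercised on arm64 hosts.
    func skipIfUnsupported(t *testing.T, containerRuntime string) {
    	if runtime.GOARCH == "arm64" && containerRuntime != "docker" {
    		t.Skipf("skipping: only docker runtime supported on arm64, testing %s", containerRuntime)
    	}
    }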
TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1786: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)
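The three TunnelCmd DNS skips here come from a single platform gate; a sketch of that guard (the function name is illustrative):

    package mypkg

    import (
    	"runtime"
    	"testing"
    )

    // DNS forwarding for minikube tunnel is only wired up for Hyperkit on
    // macOS, so every other platform skips up front.
    func skipUnlessDarwin(t *testing.T) {
    	if runtime.GOOS != "darwin" {
    		t.Skip("DNS forwarding is only supported for Hyperkit on Darwin")
    	}
    }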
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-699457" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-699457
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
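Note the ordering above: the profile cleanup still runs even though the test skipped, which is the behavior you get when the delete is deferred before the skip fires (t.Skip unwinds via runtime.Goexit, so deferred calls execute). A sketch of that shape; the driver parameter is a stand-in for the test's real driver check:

    package mypkg

    import (
    	"os/exec"
    	"testing"
    )

    func testDisableDriverMounts(t *testing.T, driver string) {
    	// Deferred cleanup runs even when t.Skip unwinds the test.
    	cleanup := exec.Command("out/minikube-linux-arm64",
    		"delete", "-p", "disable-driver-mounts-699457")
    	defer cleanup.Run()
    	if driver != "virtualbox" {
    		t.Skip("skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox")
    	}
    }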
TestNetworkPlugins/group/kubenet (5.74s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-595882 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-595882

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-595882

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-595882

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-595882

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-595882

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-595882

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-595882

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-595882

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-595882

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-595882

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: /etc/hosts:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: /etc/resolv.conf:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-595882

>>> host: crictl pods:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: crictl containers:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> k8s: describe netcat deployment:
error: context "kubenet-595882" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-595882" does not exist

>>> k8s: netcat logs:
error: context "kubenet-595882" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-595882" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-595882" does not exist

>>> k8s: coredns logs:
error: context "kubenet-595882" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-595882" does not exist

>>> k8s: api server logs:
error: context "kubenet-595882" does not exist

>>> host: /etc/cni:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: ip a s:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: ip r s:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: iptables-save:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: iptables table nat:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-595882" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-595882" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-595882" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: kubelet daemon config:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> k8s: kubelet logs:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-595882

>>> host: docker daemon status:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: docker daemon config:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: docker system info:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: cri-docker daemon status:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: cri-docker daemon config:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: cri-dockerd version:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: containerd daemon status:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: containerd daemon config:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: containerd config dump:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: crio daemon status:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: crio daemon config:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: /etc/crio:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

>>> host: crio config:
* Profile "kubenet-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-595882"

----------------------- debugLogs end: kubenet-595882 [took: 5.445749163s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-595882" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-595882
--- SKIP: TestNetworkPlugins/group/kubenet (5.74s)
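Every command in the debugLogs dump above failed the same way because this profile was skipped before `minikube start` ever ran, so neither a kubeconfig context nor a profile directory exists. A sketch of the up-front check those errors imply (illustrative; the real harness simply runs the commands and records their output):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load the merged kubeconfig and look for the expected context before
    	// running any diagnostics against it.
    	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
    	if err != nil {
    		panic(err)
    	}
    	if _, ok := cfg.Contexts["kubenet-595882"]; !ok {
    		fmt.Println(`context "kubenet-595882" does not exist`)
    	}
    }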
TestNetworkPlugins/group/cilium (6.43s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-595882 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-595882

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-595882

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-595882

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-595882

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-595882

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-595882

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-595882

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-595882

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-595882

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-595882

>>> host: /etc/nsswitch.conf:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> host: /etc/hosts:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> host: /etc/resolv.conf:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-595882

>>> host: crictl pods:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> host: crictl containers:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> k8s: describe netcat deployment:
error: context "cilium-595882" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-595882" does not exist

>>> k8s: netcat logs:
error: context "cilium-595882" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-595882" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-595882" does not exist

>>> k8s: coredns logs:
error: context "cilium-595882" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-595882" does not exist

>>> k8s: api server logs:
error: context "cilium-595882" does not exist

>>> host: /etc/cni:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> host: ip a s:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> host: ip r s:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> host: iptables-save:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> host: iptables table nat:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-595882

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-595882

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-595882" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-595882" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-595882

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-595882

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-595882" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-595882" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-595882" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-595882" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-595882" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> host: kubelet daemon config:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> k8s: kubelet logs:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-595882

>>> host: docker daemon status:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> host: docker daemon config:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> host: docker system info:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> host: cri-docker daemon status:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> host: cri-docker daemon config:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> host: cri-dockerd version:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> host: containerd daemon status:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> host: containerd daemon config:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

>>> host: containerd config dump:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"
                                                
>>> host: crio daemon status:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-595882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-595882"

                                                
                                                
----------------------- debugLogs end: cilium-595882 [took: 6.182294159s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-595882" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-595882
--- SKIP: TestNetworkPlugins/group/cilium (6.43s)