Test Report: Docker_Linux_crio 17545

52a7e2b524003d342b18395ad47f8bdcf4462a42:2023-11-03:31732

Test fail (6/308)

|-------|----------------------------------------------------------|--------------|
| Order | Failed test                                              | Duration (s) |
|-------|----------------------------------------------------------|--------------|
| 28    | TestAddons/parallel/Ingress                              | 155.81       |
| 147   | TestFunctional/parallel/ImageCommands/ImageLoadFromFile  | 5.89         |
| 159   | TestIngressAddonLegacy/serial/ValidateIngressAddons      | 178.61       |
| 209   | TestMultiNode/serial/PingHostFrom2Pods                   | 3.08         |
| 230   | TestRunningBinaryUpgrade                                 | 62           |
| 238   | TestStoppedBinaryUpgrade/Upgrade                         | 91.31        |
|-------|----------------------------------------------------------|--------------|
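
Each failure can be re-run in isolation with Go's test filter. A local repro sketch follows; the --minikube-start-args flag and its values mirror this job's driver/runtime but are an assumption, so check test/integration in this minikube revision for the exact flags:

	# hypothetical local re-run of one failed test from this matrix
	go test -v -timeout 30m ./test/integration \
		-run 'TestAddons/parallel/Ingress' \
		-args --minikube-start-args='--driver=docker --container-runtime=crio'
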
TestAddons/parallel/Ingress (155.81s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-643880 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context addons-643880 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (2.298549134s)
addons_test.go:231: (dbg) Run:  kubectl --context addons-643880 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:231: (dbg) Done: kubectl --context addons-643880 replace --force -f testdata/nginx-ingress-v1.yaml: (1.018548254s)
addons_test.go:244: (dbg) Run:  kubectl --context addons-643880 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [76f66131-ed92-424c-b1ab-f13894ffe5a5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [76f66131-ed92-424c-b1ab-f13894ffe5a5] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.010695735s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-643880 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-643880 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.474927206s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
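
curl exits with code 28 on "operation timed out", and ssh propagates the remote command's status, so the request above most likely hung for the full window rather than being refused. A sketch for replaying the probe by hand with verbose output and a shorter cap (assumes the addons-643880 profile is still running):

	out/minikube-linux-amd64 -p addons-643880 ssh "curl -v --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
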
addons_test.go:285: (dbg) Run:  kubectl --context addons-643880 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-643880 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-643880 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-643880 addons disable ingress-dns --alsologtostderr -v=1: (1.147806565s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-643880 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-643880 addons disable ingress --alsologtostderr -v=1: (7.590767366s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-643880
helpers_test.go:235: (dbg) docker inspect addons-643880:
-- stdout --
	[
	    {
	        "Id": "4c5cae6311a95ca11574f6de7dfe1ad7ff87ed607511443d33a6d2afd8e712f0",
	        "Created": "2023-11-03T20:29:41.554136305Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 13463,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-03T20:29:41.868130634Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:efd86a3765897881549ab05896b96b2b4ff17749f0a64fb6c355478ceebc8b47",
	        "ResolvConfPath": "/var/lib/docker/containers/4c5cae6311a95ca11574f6de7dfe1ad7ff87ed607511443d33a6d2afd8e712f0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4c5cae6311a95ca11574f6de7dfe1ad7ff87ed607511443d33a6d2afd8e712f0/hostname",
	        "HostsPath": "/var/lib/docker/containers/4c5cae6311a95ca11574f6de7dfe1ad7ff87ed607511443d33a6d2afd8e712f0/hosts",
	        "LogPath": "/var/lib/docker/containers/4c5cae6311a95ca11574f6de7dfe1ad7ff87ed607511443d33a6d2afd8e712f0/4c5cae6311a95ca11574f6de7dfe1ad7ff87ed607511443d33a6d2afd8e712f0-json.log",
	        "Name": "/addons-643880",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-643880:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-643880",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ac01ca99921acb2036285d3e5f90ba699bbe9d825af1a1bc26c889265dc8c2e1-init/diff:/var/lib/docker/overlay2/10f966e66ad11ebf0563dbe6bde99d657b975224ac619c4daa8db5a19a2b3420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ac01ca99921acb2036285d3e5f90ba699bbe9d825af1a1bc26c889265dc8c2e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ac01ca99921acb2036285d3e5f90ba699bbe9d825af1a1bc26c889265dc8c2e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ac01ca99921acb2036285d3e5f90ba699bbe9d825af1a1bc26c889265dc8c2e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-643880",
	                "Source": "/var/lib/docker/volumes/addons-643880/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-643880",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-643880",
	                "name.minikube.sigs.k8s.io": "addons-643880",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6f06c1f34dd4961ad4891a1091fccd311305287e1c7b2df7dd1855030628d87b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6f06c1f34dd4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-643880": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4c5cae6311a9",
	                        "addons-643880"
	                    ],
	                    "NetworkID": "fa573d5b3a0904197a6586541a2509ff4513b9e632b6bc5dfb6295a05d4dc651",
	                    "EndpointID": "ffaf4c1c4a1f4854d03d5a5154e65e4031cbd903b989bc53f8a68b414997ca83",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
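
The post-mortem captures the full inspect dump above; for a quick manual check, docker inspect's --format template can pull just the fields this failure turns on, container state and the profile network's address (a sketch against the same container name):

	docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "addons-643880").IPAddress}}' addons-643880
	# expected per the dump above: running 192.168.49.2
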
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-643880 -n addons-643880
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-643880 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-643880 logs -n 25: (1.129005694s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p download-only-798930                                                                     | download-only-798930   | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:29 UTC | 03 Nov 23 20:29 UTC |
	| delete  | -p download-only-798930                                                                     | download-only-798930   | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:29 UTC | 03 Nov 23 20:29 UTC |
	| start   | --download-only -p                                                                          | download-docker-639246 | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:29 UTC |                     |
	|         | download-docker-639246                                                                      |                        |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |                |                     |                     |
	|         | --driver=docker                                                                             |                        |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |                |                     |                     |
	| delete  | -p download-docker-639246                                                                   | download-docker-639246 | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:29 UTC | 03 Nov 23 20:29 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-580639   | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:29 UTC |                     |
	|         | binary-mirror-580639                                                                        |                        |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |                |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |                |                     |                     |
	|         | http://127.0.0.1:33755                                                                      |                        |         |                |                     |                     |
	|         | --driver=docker                                                                             |                        |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |                |                     |                     |
	| delete  | -p binary-mirror-580639                                                                     | binary-mirror-580639   | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:29 UTC | 03 Nov 23 20:29 UTC |
	| addons  | enable dashboard -p                                                                         | addons-643880          | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:29 UTC |                     |
	|         | addons-643880                                                                               |                        |         |                |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-643880          | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:29 UTC |                     |
	|         | addons-643880                                                                               |                        |         |                |                     |                     |
	| start   | -p addons-643880 --wait=true                                                                | addons-643880          | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:29 UTC | 03 Nov 23 20:31 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |                |                     |                     |
	|         | --addons=registry                                                                           |                        |         |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |                |                     |                     |
	|         | --driver=docker                                                                             |                        |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |                |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |                |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-643880          | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:31 UTC | 03 Nov 23 20:31 UTC |
	|         | addons-643880                                                                               |                        |         |                |                     |                     |
	| addons  | addons-643880 addons                                                                        | addons-643880          | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:31 UTC | 03 Nov 23 20:31 UTC |
	|         | disable metrics-server                                                                      |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| ip      | addons-643880 ip                                                                            | addons-643880          | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:31 UTC | 03 Nov 23 20:31 UTC |
	| addons  | addons-643880 addons disable                                                                | addons-643880          | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:31 UTC | 03 Nov 23 20:31 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |                |                     |                     |
	|         | -v=1                                                                                        |                        |         |                |                     |                     |
	| addons  | addons-643880 addons disable                                                                | addons-643880          | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:31 UTC | 03 Nov 23 20:31 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |                |                     |                     |
	|         | -v=1                                                                                        |                        |         |                |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-643880          | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:31 UTC | 03 Nov 23 20:31 UTC |
	|         | -p addons-643880                                                                            |                        |         |                |                     |                     |
	| ssh     | addons-643880 ssh curl -s                                                                   | addons-643880          | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:31 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |                |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |                |                     |                     |
	| ssh     | addons-643880 ssh cat                                                                       | addons-643880          | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:31 UTC | 03 Nov 23 20:31 UTC |
	|         | /opt/local-path-provisioner/pvc-e599eada-6185-4b3f-9f52-d42b11fb9454_default_test-pvc/file1 |                        |         |                |                     |                     |
	| addons  | addons-643880 addons disable                                                                | addons-643880          | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:32 UTC | 03 Nov 23 20:32 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | enable headlamp                                                                             | addons-643880          | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:32 UTC | 03 Nov 23 20:32 UTC |
	|         | -p addons-643880                                                                            |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-643880          | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:32 UTC | 03 Nov 23 20:32 UTC |
	|         | addons-643880                                                                               |                        |         |                |                     |                     |
	| addons  | addons-643880 addons                                                                        | addons-643880          | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:33 UTC | 03 Nov 23 20:33 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | addons-643880 addons                                                                        | addons-643880          | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:33 UTC | 03 Nov 23 20:33 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| ip      | addons-643880 ip                                                                            | addons-643880          | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:34 UTC | 03 Nov 23 20:34 UTC |
	| addons  | addons-643880 addons disable                                                                | addons-643880          | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:34 UTC | 03 Nov 23 20:34 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |                |                     |                     |
	|         | -v=1                                                                                        |                        |         |                |                     |                     |
	| addons  | addons-643880 addons disable                                                                | addons-643880          | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:34 UTC | 03 Nov 23 20:34 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/03 20:29:17
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1103 20:29:17.232698   12790 out.go:296] Setting OutFile to fd 1 ...
	I1103 20:29:17.232798   12790 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1103 20:29:17.232806   12790 out.go:309] Setting ErrFile to fd 2...
	I1103 20:29:17.232811   12790 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1103 20:29:17.232981   12790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17545-5130/.minikube/bin
	I1103 20:29:17.233516   12790 out.go:303] Setting JSON to false
	I1103 20:29:17.234278   12790 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":708,"bootTime":1699042650,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1103 20:29:17.234330   12790 start.go:138] virtualization: kvm guest
	I1103 20:29:17.236499   12790 out.go:177] * [addons-643880] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1103 20:29:17.238125   12790 out.go:177]   - MINIKUBE_LOCATION=17545
	I1103 20:29:17.239643   12790 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1103 20:29:17.238140   12790 notify.go:220] Checking for updates...
	I1103 20:29:17.241360   12790 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17545-5130/kubeconfig
	I1103 20:29:17.243058   12790 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17545-5130/.minikube
	I1103 20:29:17.244496   12790 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1103 20:29:17.246026   12790 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1103 20:29:17.247510   12790 driver.go:378] Setting default libvirt URI to qemu:///system
	I1103 20:29:17.267630   12790 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1103 20:29:17.267721   12790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1103 20:29:17.313084   12790 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-11-03 20:29:17.305206763 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1103 20:29:17.313180   12790 docker.go:295] overlay module found
	I1103 20:29:17.315313   12790 out.go:177] * Using the docker driver based on user configuration
	I1103 20:29:17.316773   12790 start.go:298] selected driver: docker
	I1103 20:29:17.316790   12790 start.go:902] validating driver "docker" against <nil>
	I1103 20:29:17.316800   12790 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1103 20:29:17.317601   12790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1103 20:29:17.373898   12790 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-11-03 20:29:17.366487662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1103 20:29:17.374040   12790 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1103 20:29:17.374264   12790 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1103 20:29:17.376020   12790 out.go:177] * Using Docker driver with root privileges
	I1103 20:29:17.377559   12790 cni.go:84] Creating CNI manager for ""
	I1103 20:29:17.377579   12790 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1103 20:29:17.377591   12790 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1103 20:29:17.377603   12790 start_flags.go:323] config:
	{Name:addons-643880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-643880 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1103 20:29:17.379182   12790 out.go:177] * Starting control plane node addons-643880 in cluster addons-643880
	I1103 20:29:17.380410   12790 cache.go:121] Beginning downloading kic base image for docker with crio
	I1103 20:29:17.381773   12790 out.go:177] * Pulling base image ...
	I1103 20:29:17.383046   12790 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1103 20:29:17.383074   12790 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 in local docker daemon
	I1103 20:29:17.383086   12790 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17545-5130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1103 20:29:17.383094   12790 cache.go:56] Caching tarball of preloaded images
	I1103 20:29:17.383158   12790 preload.go:174] Found /home/jenkins/minikube-integration/17545-5130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1103 20:29:17.383168   12790 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1103 20:29:17.383499   12790 profile.go:148] Saving config to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/config.json ...
	I1103 20:29:17.383518   12790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/config.json: {Name:mk955ada5222643dc572a2c1262600adfe696aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:29:17.397122   12790 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 to local cache
	I1103 20:29:17.397231   12790 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 in local cache directory
	I1103 20:29:17.397246   12790 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 in local cache directory, skipping pull
	I1103 20:29:17.397250   12790 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 exists in cache, skipping pull
	I1103 20:29:17.397257   12790 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 as a tarball
	I1103 20:29:17.397264   12790 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 from local cache
	I1103 20:29:28.424226   12790 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 from cached tarball
	I1103 20:29:28.424259   12790 cache.go:194] Successfully downloaded all kic artifacts
	I1103 20:29:28.424294   12790 start.go:365] acquiring machines lock for addons-643880: {Name:mkce678846378d0e2c0723681011980c7ebd0c8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1103 20:29:28.424381   12790 start.go:369] acquired machines lock for "addons-643880" in 61.791µs
	I1103 20:29:28.424401   12790 start.go:93] Provisioning new machine with config: &{Name:addons-643880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-643880 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1103 20:29:28.424516   12790 start.go:125] createHost starting for "" (driver="docker")
	I1103 20:29:28.426373   12790 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1103 20:29:28.426602   12790 start.go:159] libmachine.API.Create for "addons-643880" (driver="docker")
	I1103 20:29:28.426630   12790 client.go:168] LocalClient.Create starting
	I1103 20:29:28.426720   12790 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem
	I1103 20:29:28.585826   12790 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/cert.pem
	I1103 20:29:28.705401   12790 cli_runner.go:164] Run: docker network inspect addons-643880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1103 20:29:28.720351   12790 cli_runner.go:211] docker network inspect addons-643880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1103 20:29:28.720400   12790 network_create.go:281] running [docker network inspect addons-643880] to gather additional debugging logs...
	I1103 20:29:28.720445   12790 cli_runner.go:164] Run: docker network inspect addons-643880
	W1103 20:29:28.734538   12790 cli_runner.go:211] docker network inspect addons-643880 returned with exit code 1
	I1103 20:29:28.734562   12790 network_create.go:284] error running [docker network inspect addons-643880]: docker network inspect addons-643880: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-643880 not found
	I1103 20:29:28.734572   12790 network_create.go:286] output of [docker network inspect addons-643880]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-643880 not found
	
	** /stderr **
	I1103 20:29:28.734671   12790 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1103 20:29:28.750047   12790 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002caac20}
	I1103 20:29:28.750092   12790 network_create.go:124] attempt to create docker network addons-643880 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1103 20:29:28.750132   12790 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-643880 addons-643880
	I1103 20:29:28.800820   12790 network_create.go:108] docker network addons-643880 192.168.49.0/24 created
	I1103 20:29:28.800848   12790 kic.go:121] calculated static IP "192.168.49.2" for the "addons-643880" container
	I1103 20:29:28.800904   12790 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1103 20:29:28.814700   12790 cli_runner.go:164] Run: docker volume create addons-643880 --label name.minikube.sigs.k8s.io=addons-643880 --label created_by.minikube.sigs.k8s.io=true
	I1103 20:29:28.830134   12790 oci.go:103] Successfully created a docker volume addons-643880
	I1103 20:29:28.830195   12790 cli_runner.go:164] Run: docker run --rm --name addons-643880-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-643880 --entrypoint /usr/bin/test -v addons-643880:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 -d /var/lib
	I1103 20:29:36.076337   12790 cli_runner.go:217] Completed: docker run --rm --name addons-643880-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-643880 --entrypoint /usr/bin/test -v addons-643880:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 -d /var/lib: (7.246113241s)
	I1103 20:29:36.076361   12790 oci.go:107] Successfully prepared a docker volume addons-643880
	I1103 20:29:36.076398   12790 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1103 20:29:36.076429   12790 kic.go:194] Starting extracting preloaded images to volume ...
	I1103 20:29:36.076489   12790 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17545-5130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-643880:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 -I lz4 -xf /preloaded.tar -C /extractDir
	I1103 20:29:41.485496   12790 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17545-5130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-643880:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 -I lz4 -xf /preloaded.tar -C /extractDir: (5.408973686s)
	I1103 20:29:41.485524   12790 kic.go:203] duration metric: took 5.409104 seconds to extract preloaded images to volume
	W1103 20:29:41.485656   12790 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1103 20:29:41.485742   12790 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1103 20:29:41.540537   12790 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-643880 --name addons-643880 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-643880 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-643880 --network addons-643880 --ip 192.168.49.2 --volume addons-643880:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89
	I1103 20:29:41.875371   12790 cli_runner.go:164] Run: docker container inspect addons-643880 --format={{.State.Running}}
	I1103 20:29:41.892680   12790 cli_runner.go:164] Run: docker container inspect addons-643880 --format={{.State.Status}}
	I1103 20:29:41.908534   12790 cli_runner.go:164] Run: docker exec addons-643880 stat /var/lib/dpkg/alternatives/iptables
	I1103 20:29:41.964009   12790 oci.go:144] the created container "addons-643880" has a running status.
	I1103 20:29:41.964044   12790 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17545-5130/.minikube/machines/addons-643880/id_rsa...
	I1103 20:29:42.058403   12790 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17545-5130/.minikube/machines/addons-643880/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1103 20:29:42.076284   12790 cli_runner.go:164] Run: docker container inspect addons-643880 --format={{.State.Status}}
	I1103 20:29:42.089846   12790 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1103 20:29:42.089870   12790 kic_runner.go:114] Args: [docker exec --privileged addons-643880 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1103 20:29:42.155472   12790 cli_runner.go:164] Run: docker container inspect addons-643880 --format={{.State.Status}}
	I1103 20:29:42.171275   12790 machine.go:88] provisioning docker machine ...
	I1103 20:29:42.171320   12790 ubuntu.go:169] provisioning hostname "addons-643880"
	I1103 20:29:42.171361   12790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-643880
	I1103 20:29:42.191266   12790 main.go:141] libmachine: Using SSH client type: native
	I1103 20:29:42.191783   12790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1103 20:29:42.191808   12790 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-643880 && echo "addons-643880" | sudo tee /etc/hostname
	I1103 20:29:42.193190   12790 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36440->127.0.0.1:32772: read: connection reset by peer
	I1103 20:29:45.325592   12790 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-643880
	
	I1103 20:29:45.325671   12790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-643880
	I1103 20:29:45.341765   12790 main.go:141] libmachine: Using SSH client type: native
	I1103 20:29:45.342097   12790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1103 20:29:45.342114   12790 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-643880' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-643880/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-643880' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1103 20:29:45.455654   12790 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1103 20:29:45.455684   12790 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17545-5130/.minikube CaCertPath:/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17545-5130/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17545-5130/.minikube}
	I1103 20:29:45.455726   12790 ubuntu.go:177] setting up certificates
	I1103 20:29:45.455740   12790 provision.go:83] configureAuth start
	I1103 20:29:45.455783   12790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-643880
	I1103 20:29:45.470959   12790 provision.go:138] copyHostCerts
	I1103 20:29:45.471013   12790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17545-5130/.minikube/cert.pem (1123 bytes)
	I1103 20:29:45.471120   12790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17545-5130/.minikube/key.pem (1679 bytes)
	I1103 20:29:45.471224   12790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17545-5130/.minikube/ca.pem (1082 bytes)
	I1103 20:29:45.471281   12790 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17545-5130/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca-key.pem org=jenkins.addons-643880 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-643880]
	I1103 20:29:45.528208   12790 provision.go:172] copyRemoteCerts
	I1103 20:29:45.528250   12790 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1103 20:29:45.528278   12790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-643880
	I1103 20:29:45.543192   12790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/addons-643880/id_rsa Username:docker}
	I1103 20:29:45.628196   12790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1103 20:29:45.649016   12790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1103 20:29:45.667841   12790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1103 20:29:45.686658   12790 provision.go:86] duration metric: configureAuth took 230.90959ms
	I1103 20:29:45.686683   12790 ubuntu.go:193] setting minikube options for container-runtime
	I1103 20:29:45.686818   12790 config.go:182] Loaded profile config "addons-643880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1103 20:29:45.686907   12790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-643880
	I1103 20:29:45.703555   12790 main.go:141] libmachine: Using SSH client type: native
	I1103 20:29:45.703964   12790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1103 20:29:45.703988   12790 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1103 20:29:45.901843   12790 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1103 20:29:45.901874   12790 machine.go:91] provisioned docker machine in 3.730580637s
	I1103 20:29:45.901885   12790 client.go:171] LocalClient.Create took 17.475248707s
	I1103 20:29:45.901906   12790 start.go:167] duration metric: libmachine.API.Create for "addons-643880" took 17.475304639s
	I1103 20:29:45.901915   12790 start.go:300] post-start starting for "addons-643880" (driver="docker")
	I1103 20:29:45.901923   12790 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1103 20:29:45.901974   12790 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1103 20:29:45.902009   12790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-643880
	I1103 20:29:45.917188   12790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/addons-643880/id_rsa Username:docker}
	I1103 20:29:46.003888   12790 ssh_runner.go:195] Run: cat /etc/os-release
	I1103 20:29:46.006669   12790 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1103 20:29:46.006707   12790 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1103 20:29:46.006722   12790 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1103 20:29:46.006736   12790 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1103 20:29:46.006753   12790 filesync.go:126] Scanning /home/jenkins/minikube-integration/17545-5130/.minikube/addons for local assets ...
	I1103 20:29:46.006822   12790 filesync.go:126] Scanning /home/jenkins/minikube-integration/17545-5130/.minikube/files for local assets ...
	I1103 20:29:46.006858   12790 start.go:303] post-start completed in 104.936805ms
	I1103 20:29:46.007157   12790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-643880
	I1103 20:29:46.023692   12790 profile.go:148] Saving config to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/config.json ...
	I1103 20:29:46.023949   12790 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1103 20:29:46.023998   12790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-643880
	I1103 20:29:46.040040   12790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/addons-643880/id_rsa Username:docker}
	I1103 20:29:46.120365   12790 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1103 20:29:46.123998   12790 start.go:128] duration metric: createHost completed in 17.699471382s
	I1103 20:29:46.124020   12790 start.go:83] releasing machines lock for "addons-643880", held for 17.699628314s
	I1103 20:29:46.124079   12790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-643880
	I1103 20:29:46.138480   12790 ssh_runner.go:195] Run: cat /version.json
	I1103 20:29:46.138515   12790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-643880
	I1103 20:29:46.138594   12790 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1103 20:29:46.138641   12790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-643880
	I1103 20:29:46.154728   12790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/addons-643880/id_rsa Username:docker}
	I1103 20:29:46.154902   12790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/addons-643880/id_rsa Username:docker}
	I1103 20:29:46.235561   12790 ssh_runner.go:195] Run: systemctl --version
	I1103 20:29:46.323961   12790 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1103 20:29:46.457616   12790 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1103 20:29:46.461608   12790 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1103 20:29:46.478336   12790 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1103 20:29:46.478414   12790 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1103 20:29:46.502669   12790 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
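
Note that the default bridge/podman CNI configs are disabled purely by renaming them with a .mk_disabled suffix, so the change is reversible. A hedged Go sketch of that find-and-rename idiom (hypothetical helper, standard library only):

// disableCNIConfigs moves matching CNI config files aside with a .mk_disabled
// suffix so the runtime ignores them, as in the find/mv commands above.
package main

import (
	"os"
	"path/filepath"
	"strings"
)

func disableCNIConfigs(dir string, patterns ...string) ([]string, error) {
	var moved []string
	for _, p := range patterns {
		matches, err := filepath.Glob(filepath.Join(dir, p))
		if err != nil {
			return moved, err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, m)
		}
	}
	return moved, nil
}

func main() {
	_, _ = disableCNIConfigs("/etc/cni/net.d", "*bridge*", "*podman*")
}
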
	I1103 20:29:46.502688   12790 start.go:472] detecting cgroup driver to use...
	I1103 20:29:46.502714   12790 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1103 20:29:46.502765   12790 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1103 20:29:46.515082   12790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1103 20:29:46.523919   12790 docker.go:203] disabling cri-docker service (if available) ...
	I1103 20:29:46.523957   12790 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1103 20:29:46.534755   12790 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1103 20:29:46.545914   12790 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1103 20:29:46.623991   12790 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1103 20:29:46.704779   12790 docker.go:219] disabling docker service ...
	I1103 20:29:46.704853   12790 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1103 20:29:46.720561   12790 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1103 20:29:46.730118   12790 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1103 20:29:46.799280   12790 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1103 20:29:46.876414   12790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1103 20:29:46.885962   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1103 20:29:46.899012   12790 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1103 20:29:46.899067   12790 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1103 20:29:46.906906   12790 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1103 20:29:46.906955   12790 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1103 20:29:46.914857   12790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1103 20:29:46.922712   12790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1103 20:29:46.930497   12790 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1103 20:29:46.938753   12790 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1103 20:29:46.945375   12790 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1103 20:29:46.952527   12790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1103 20:29:47.023801   12790 ssh_runner.go:195] Run: sudo systemctl restart crio
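
Each CRI-O setting above (pause_image, cgroup_manager, conmon_cgroup) is patched with an in-place sed over the drop-in file rather than a full rewrite. A rough Go equivalent of that single-key edit, with the path and keys taken from the log; the helper itself is illustrative:

// setConfigKey rewrites one "key = value" line in a config file, mirroring
// sed -i 's|^.*key = .*$|key = "value"|'. Sketch only, not minikube's code.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func setConfigKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	updated := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, updated, 0644)
}

func main() {
	cfg := "/etc/crio/crio.conf.d/02-crio.conf"
	_ = setConfigKey(cfg, "pause_image", "registry.k8s.io/pause:3.9")
	_ = setConfigKey(cfg, "cgroup_manager", "cgroupfs")
}
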
	I1103 20:29:47.132933   12790 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1103 20:29:47.133004   12790 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1103 20:29:47.135979   12790 start.go:540] Will wait 60s for crictl version
	I1103 20:29:47.136020   12790 ssh_runner.go:195] Run: which crictl
	I1103 20:29:47.138850   12790 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1103 20:29:47.169091   12790 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1103 20:29:47.169182   12790 ssh_runner.go:195] Run: crio --version
	I1103 20:29:47.200956   12790 ssh_runner.go:195] Run: crio --version
	I1103 20:29:47.232619   12790 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1103 20:29:47.234188   12790 cli_runner.go:164] Run: docker network inspect addons-643880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1103 20:29:47.249339   12790 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1103 20:29:47.252472   12790 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
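
The host.minikube.internal mapping (and the control-plane.minikube.internal entry added the same way a moment later) uses a filter-then-append rewrite of /etc/hosts, so repeated runs stay idempotent. A sketch of the same idiom in Go, assuming a tab-separated hosts line as in the log:

// setHostsEntry drops any existing line ending in "\t<name>" and appends a
// fresh "ip\tname" mapping, writing via a temp file. The log does this with
// grep -v / echo / sudo cp; this Go version is illustrative only.
package main

import (
	"os"
	"strings"
)

func setHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // filter out the stale entry
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // swap into place (the log uses sudo cp)
}

func main() {
	_ = setHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal")
}
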
	I1103 20:29:47.261784   12790 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1103 20:29:47.261843   12790 ssh_runner.go:195] Run: sudo crictl images --output json
	I1103 20:29:47.311363   12790 crio.go:496] all images are preloaded for cri-o runtime.
	I1103 20:29:47.311383   12790 crio.go:415] Images already preloaded, skipping extraction
	I1103 20:29:47.311424   12790 ssh_runner.go:195] Run: sudo crictl images --output json
	I1103 20:29:47.339823   12790 crio.go:496] all images are preloaded for cri-o runtime.
	I1103 20:29:47.339844   12790 cache_images.go:84] Images are preloaded, skipping loading
	I1103 20:29:47.339894   12790 ssh_runner.go:195] Run: crio config
	I1103 20:29:47.377447   12790 cni.go:84] Creating CNI manager for ""
	I1103 20:29:47.377473   12790 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1103 20:29:47.377493   12790 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1103 20:29:47.377513   12790 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-643880 NodeName:addons-643880 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1103 20:29:47.377635   12790 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-643880"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1103 20:29:47.377689   12790 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-643880 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:addons-643880 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
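
The kubeadm config and kubelet unit dumped above are rendered from templates filled in with the kubeadm options struct logged at kubeadm.go:176. A hypothetical text/template sketch of that rendering; the field names and template text here are invented for illustration and do not copy minikube's actual bootstrapper template:

package main

import (
	"os"
	"text/template"
)

// initCfg is an illustrative fragment of an InitConfiguration template.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	// Values taken from the log for this cluster.
	_ = t.Execute(os.Stdout, struct {
		NodeIP, NodeName, CRISocket string
		APIServerPort               int
	}{"192.168.49.2", "addons-643880", "unix:///var/run/crio/crio.sock", 8443})
}
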
	I1103 20:29:47.377732   12790 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1103 20:29:47.385074   12790 binaries.go:44] Found k8s binaries, skipping transfer
	I1103 20:29:47.385149   12790 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1103 20:29:47.392086   12790 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1103 20:29:47.406099   12790 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1103 20:29:47.420048   12790 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1103 20:29:47.434146   12790 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1103 20:29:47.437038   12790 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1103 20:29:47.445658   12790 certs.go:56] Setting up /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880 for IP: 192.168.49.2
	I1103 20:29:47.445682   12790 certs.go:190] acquiring lock for shared ca certs: {Name:mk18b7761724bd0081d8ca2b791d44e447ae6553 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:29:47.445791   12790 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.key
	I1103 20:29:47.587352   12790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt ...
	I1103 20:29:47.587378   12790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt: {Name:mk1c70a9c0f45b3b5a4d21c074e15f0832963abf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:29:47.587553   12790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17545-5130/.minikube/ca.key ...
	I1103 20:29:47.587567   12790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/.minikube/ca.key: {Name:mk45b44f1c4905e7ead6d4017c686cedf8f69189 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:29:47.587660   12790 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17545-5130/.minikube/proxy-client-ca.key
	I1103 20:29:47.670099   12790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17545-5130/.minikube/proxy-client-ca.crt ...
	I1103 20:29:47.670123   12790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/.minikube/proxy-client-ca.crt: {Name:mk316ec7d1c49bca2d58bde1e6e78648511dd350 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:29:47.670285   12790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17545-5130/.minikube/proxy-client-ca.key ...
	I1103 20:29:47.670298   12790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/.minikube/proxy-client-ca.key: {Name:mkdc5531c3d5e859d26d4971693e937240301578 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:29:47.670417   12790 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.key
	I1103 20:29:47.670432   12790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt with IP's: []
	I1103 20:29:47.819076   12790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt ...
	I1103 20:29:47.819102   12790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: {Name:mk75ee1a4fac52c30164c1d5075e8d853cbffe08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:29:47.819286   12790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.key ...
	I1103 20:29:47.819301   12790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.key: {Name:mka9457cce93f330836cd4f97010c4592839fd60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:29:47.819405   12790 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/apiserver.key.dd3b5fb2
	I1103 20:29:47.819424   12790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1103 20:29:47.905819   12790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/apiserver.crt.dd3b5fb2 ...
	I1103 20:29:47.905847   12790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/apiserver.crt.dd3b5fb2: {Name:mkfd8cfdea5e16ef8861e1c54fc669ca8609330b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:29:47.905992   12790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/apiserver.key.dd3b5fb2 ...
	I1103 20:29:47.906004   12790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/apiserver.key.dd3b5fb2: {Name:mk077033d655abdf63877a41cdf51a3f8c0f1003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:29:47.906074   12790 certs.go:337] copying /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/apiserver.crt
	I1103 20:29:47.906150   12790 certs.go:341] copying /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/apiserver.key
	I1103 20:29:47.906196   12790 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/proxy-client.key
	I1103 20:29:47.906211   12790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/proxy-client.crt with IP's: []
	I1103 20:29:48.059399   12790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/proxy-client.crt ...
	I1103 20:29:48.059425   12790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/proxy-client.crt: {Name:mkdb304293482de9b292d339bb5cd1d9c59ef108 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:29:48.059570   12790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/proxy-client.key ...
	I1103 20:29:48.059581   12790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/proxy-client.key: {Name:mkd6de4d7d11d41fdc6731c5b3c7d77d3b9bf618 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
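
All of the certs above follow the same shape: generate a self-signed CA once, then sign per-purpose certs against it, with the apiserver cert carrying the IP SANs listed at 20:29:47.819424. A compressed standard-library sketch of that flow (key sizes, subjects, and lifetimes are illustrative, not minikube's exact values):

// Sketch of the CA-then-signed-cert flow using crypto/x509 only.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA (template is both subject and issuer).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert signed by the CA, with the IP SANs the log reports.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = srvDER // PEM-encode and write to disk in real use
}
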
	I1103 20:29:48.059738   12790 certs.go:437] found cert: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca-key.pem (1675 bytes)
	I1103 20:29:48.059775   12790 certs.go:437] found cert: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem (1082 bytes)
	I1103 20:29:48.059800   12790 certs.go:437] found cert: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/home/jenkins/minikube-integration/17545-5130/.minikube/certs/cert.pem (1123 bytes)
	I1103 20:29:48.059828   12790 certs.go:437] found cert: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/home/jenkins/minikube-integration/17545-5130/.minikube/certs/key.pem (1679 bytes)
	I1103 20:29:48.060337   12790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1103 20:29:48.080415   12790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1103 20:29:48.099808   12790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1103 20:29:48.118626   12790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1103 20:29:48.137460   12790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1103 20:29:48.156178   12790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1103 20:29:48.174526   12790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1103 20:29:48.193048   12790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1103 20:29:48.211277   12790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1103 20:29:48.230815   12790 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1103 20:29:48.244811   12790 ssh_runner.go:195] Run: openssl version
	I1103 20:29:48.249232   12790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1103 20:29:48.256586   12790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1103 20:29:48.259329   12790 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  3 20:29 /usr/share/ca-certificates/minikubeCA.pem
	I1103 20:29:48.259367   12790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1103 20:29:48.264915   12790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
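
The two commands above install the CA into the system trust store: OpenSSL finds CAs in /etc/ssl/certs via a <subject-hash>.0 symlink (b5213941.0 here). A sketch of that step, shelling out to openssl for the hash; the helper is hypothetical:

// installCA asks openssl for the cert's subject hash, then symlinks
// /etc/ssl/certs/<hash>.0 at the PEM so OpenSSL-based clients trust it.
package main

import (
	"os"
	"os/exec"
	"strings"
)

func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	_ = installCA("/usr/share/ca-certificates/minikubeCA.pem")
}
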
	I1103 20:29:48.272049   12790 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1103 20:29:48.274734   12790 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1103 20:29:48.274798   12790 kubeadm.go:404] StartCluster: {Name:addons-643880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-643880 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1103 20:29:48.274865   12790 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1103 20:29:48.274920   12790 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1103 20:29:48.304667   12790 cri.go:89] found id: ""
	I1103 20:29:48.304723   12790 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1103 20:29:48.311598   12790 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1103 20:29:48.318642   12790 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1103 20:29:48.318690   12790 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1103 20:29:48.325403   12790 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1103 20:29:48.325436   12790 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1103 20:29:48.364952   12790 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1103 20:29:48.365018   12790 kubeadm.go:322] [preflight] Running pre-flight checks
	I1103 20:29:48.396961   12790 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1103 20:29:48.397069   12790 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1046-gcp
	I1103 20:29:48.397139   12790 kubeadm.go:322] OS: Linux
	I1103 20:29:48.397209   12790 kubeadm.go:322] CGROUPS_CPU: enabled
	I1103 20:29:48.397277   12790 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1103 20:29:48.397359   12790 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1103 20:29:48.397428   12790 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1103 20:29:48.397513   12790 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1103 20:29:48.397591   12790 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1103 20:29:48.397660   12790 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1103 20:29:48.397722   12790 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1103 20:29:48.397806   12790 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1103 20:29:48.455360   12790 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1103 20:29:48.455508   12790 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1103 20:29:48.455681   12790 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1103 20:29:48.637549   12790 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1103 20:29:48.640381   12790 out.go:204]   - Generating certificates and keys ...
	I1103 20:29:48.640534   12790 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1103 20:29:48.640639   12790 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1103 20:29:48.718490   12790 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1103 20:29:49.077652   12790 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1103 20:29:49.175549   12790 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1103 20:29:49.327474   12790 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1103 20:29:49.406206   12790 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1103 20:29:49.406315   12790 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-643880 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1103 20:29:49.670899   12790 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1103 20:29:49.671075   12790 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-643880 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1103 20:29:49.822327   12790 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1103 20:29:49.931496   12790 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1103 20:29:50.073308   12790 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1103 20:29:50.073423   12790 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1103 20:29:50.349472   12790 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1103 20:29:50.785904   12790 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1103 20:29:50.906596   12790 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1103 20:29:51.179812   12790 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1103 20:29:51.180245   12790 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1103 20:29:51.182276   12790 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1103 20:29:51.184382   12790 out.go:204]   - Booting up control plane ...
	I1103 20:29:51.184532   12790 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1103 20:29:51.184641   12790 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1103 20:29:51.184823   12790 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1103 20:29:51.192642   12790 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1103 20:29:51.193441   12790 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1103 20:29:51.193496   12790 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1103 20:29:51.264130   12790 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1103 20:29:56.266034   12790 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.001974 seconds
	I1103 20:29:56.266160   12790 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1103 20:29:56.277188   12790 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1103 20:29:56.793924   12790 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1103 20:29:56.794200   12790 kubeadm.go:322] [mark-control-plane] Marking the node addons-643880 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1103 20:29:57.303488   12790 kubeadm.go:322] [bootstrap-token] Using token: lsmpmz.9qp1tbriwnrpgcgw
	I1103 20:29:57.305075   12790 out.go:204]   - Configuring RBAC rules ...
	I1103 20:29:57.305206   12790 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1103 20:29:57.308731   12790 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1103 20:29:57.313992   12790 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1103 20:29:57.317536   12790 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1103 20:29:57.319919   12790 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1103 20:29:57.322395   12790 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1103 20:29:57.331234   12790 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1103 20:29:57.466241   12790 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1103 20:29:57.712555   12790 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1103 20:29:57.713452   12790 kubeadm.go:322] 
	I1103 20:29:57.713573   12790 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1103 20:29:57.713591   12790 kubeadm.go:322] 
	I1103 20:29:57.713691   12790 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1103 20:29:57.713723   12790 kubeadm.go:322] 
	I1103 20:29:57.713772   12790 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1103 20:29:57.713858   12790 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1103 20:29:57.713933   12790 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1103 20:29:57.713947   12790 kubeadm.go:322] 
	I1103 20:29:57.714022   12790 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1103 20:29:57.714033   12790 kubeadm.go:322] 
	I1103 20:29:57.714096   12790 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1103 20:29:57.714105   12790 kubeadm.go:322] 
	I1103 20:29:57.714176   12790 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1103 20:29:57.714287   12790 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1103 20:29:57.714394   12790 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1103 20:29:57.714403   12790 kubeadm.go:322] 
	I1103 20:29:57.714481   12790 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1103 20:29:57.714571   12790 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1103 20:29:57.714578   12790 kubeadm.go:322] 
	I1103 20:29:57.714644   12790 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token lsmpmz.9qp1tbriwnrpgcgw \
	I1103 20:29:57.714727   12790 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1257a42a1bc28f8e43e186124137176ba467e34a8eab3dd89eabd155069822df \
	I1103 20:29:57.714763   12790 kubeadm.go:322] 	--control-plane 
	I1103 20:29:57.714771   12790 kubeadm.go:322] 
	I1103 20:29:57.714855   12790 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1103 20:29:57.714862   12790 kubeadm.go:322] 
	I1103 20:29:57.714929   12790 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token lsmpmz.9qp1tbriwnrpgcgw \
	I1103 20:29:57.715048   12790 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1257a42a1bc28f8e43e186124137176ba467e34a8eab3dd89eabd155069822df 
	I1103 20:29:57.716120   12790 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1046-gcp\n", err: exit status 1
	I1103 20:29:57.716255   12790 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1103 20:29:57.716271   12790 cni.go:84] Creating CNI manager for ""
	I1103 20:29:57.716292   12790 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1103 20:29:57.718110   12790 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1103 20:29:57.719647   12790 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1103 20:29:57.723067   12790 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1103 20:29:57.723084   12790 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1103 20:29:57.738177   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1103 20:29:58.382737   12790 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1103 20:29:58.382811   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:29:58.382821   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=44765b58c8440feed3c9edc110a2d06dc722956e minikube.k8s.io/name=addons-643880 minikube.k8s.io/updated_at=2023_11_03T20_29_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:29:58.454549   12790 ops.go:34] apiserver oom_adj: -16
	I1103 20:29:58.454686   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:29:58.529999   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:29:59.090591   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:29:59.590171   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:30:00.090748   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:30:00.590776   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:30:01.090173   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:30:01.590278   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:30:02.089996   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:30:02.590664   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:30:03.090282   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:30:03.590882   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:30:04.090682   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:30:04.590166   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:30:05.090921   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:30:05.590655   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:30:06.090213   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:30:06.590555   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:30:07.090438   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:30:07.589991   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:30:08.090809   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:30:08.590049   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:30:09.090930   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:30:09.590910   12790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:30:09.654730   12790 kubeadm.go:1081] duration metric: took 11.271976582s to wait for elevateKubeSystemPrivileges.
	I1103 20:30:09.654761   12790 kubeadm.go:406] StartCluster complete in 21.37996861s
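
The burst of identical "kubectl get sa default" runs above is a simple poll loop: retry on a fixed interval until the default service account exists (elevateKubeSystemPrivileges), which here took about 11.3 seconds. A sketch of that wait, assuming kubectl on PATH:

// waitForDefaultSA polls until "kubectl get sa default" succeeds or the
// deadline passes, mirroring the retry loop in the log. Illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil // service account exists; RBAC setup can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	_ = waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute)
}
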
	I1103 20:30:09.654779   12790 settings.go:142] acquiring lock: {Name:mk78e85fd384b188b08ef0a94e618db15bb45e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:30:09.654877   12790 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17545-5130/kubeconfig
	I1103 20:30:09.655309   12790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/kubeconfig: {Name:mk13adb0876366d94fd82a065912fb44eee0cd10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:30:09.655502   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1103 20:30:09.655563   12790 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1103 20:30:09.655644   12790 addons.go:69] Setting volumesnapshots=true in profile "addons-643880"
	I1103 20:30:09.655664   12790 addons.go:231] Setting addon volumesnapshots=true in "addons-643880"
	I1103 20:30:09.655667   12790 addons.go:69] Setting ingress-dns=true in profile "addons-643880"
	I1103 20:30:09.655688   12790 addons.go:231] Setting addon ingress-dns=true in "addons-643880"
	I1103 20:30:09.655690   12790 config.go:182] Loaded profile config "addons-643880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1103 20:30:09.655706   12790 addons.go:69] Setting metrics-server=true in profile "addons-643880"
	I1103 20:30:09.655707   12790 addons.go:69] Setting inspektor-gadget=true in profile "addons-643880"
	I1103 20:30:09.655714   12790 host.go:66] Checking if "addons-643880" exists ...
	I1103 20:30:09.655721   12790 addons.go:231] Setting addon metrics-server=true in "addons-643880"
	I1103 20:30:09.655745   12790 addons.go:69] Setting ingress=true in profile "addons-643880"
	I1103 20:30:09.655762   12790 addons.go:69] Setting registry=true in profile "addons-643880"
	I1103 20:30:09.655765   12790 addons.go:231] Setting addon ingress=true in "addons-643880"
	I1103 20:30:09.655772   12790 addons.go:231] Setting addon registry=true in "addons-643880"
	I1103 20:30:09.655695   12790 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-643880"
	I1103 20:30:09.655751   12790 addons.go:69] Setting cloud-spanner=true in profile "addons-643880"
	I1103 20:30:09.655802   12790 host.go:66] Checking if "addons-643880" exists ...
	I1103 20:30:09.655814   12790 addons.go:231] Setting addon cloud-spanner=true in "addons-643880"
	I1103 20:30:09.655871   12790 host.go:66] Checking if "addons-643880" exists ...
	I1103 20:30:09.655732   12790 addons.go:69] Setting default-storageclass=true in profile "addons-643880"
	I1103 20:30:09.655899   12790 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-643880"
	I1103 20:30:09.655754   12790 addons.go:69] Setting helm-tiller=true in profile "addons-643880"
	I1103 20:30:09.655961   12790 addons.go:231] Setting addon helm-tiller=true in "addons-643880"
	I1103 20:30:09.655994   12790 host.go:66] Checking if "addons-643880" exists ...
	I1103 20:30:09.655771   12790 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-643880"
	I1103 20:30:09.655805   12790 host.go:66] Checking if "addons-643880" exists ...
	I1103 20:30:09.656083   12790 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-643880"
	I1103 20:30:09.656122   12790 host.go:66] Checking if "addons-643880" exists ...
	I1103 20:30:09.656179   12790 cli_runner.go:164] Run: docker container inspect addons-643880 --format={{.State.Status}}
	I1103 20:30:09.656228   12790 cli_runner.go:164] Run: docker container inspect addons-643880 --format={{.State.Status}}
	I1103 20:30:09.656288   12790 cli_runner.go:164] Run: docker container inspect addons-643880 --format={{.State.Status}}
	I1103 20:30:09.656331   12790 cli_runner.go:164] Run: docker container inspect addons-643880 --format={{.State.Status}}
	I1103 20:30:09.656402   12790 cli_runner.go:164] Run: docker container inspect addons-643880 --format={{.State.Status}}
	I1103 20:30:09.656524   12790 cli_runner.go:164] Run: docker container inspect addons-643880 --format={{.State.Status}}
	I1103 20:30:09.656603   12790 cli_runner.go:164] Run: docker container inspect addons-643880 --format={{.State.Status}}
	I1103 20:30:09.655739   12790 host.go:66] Checking if "addons-643880" exists ...
	I1103 20:30:09.657219   12790 cli_runner.go:164] Run: docker container inspect addons-643880 --format={{.State.Status}}
	I1103 20:30:09.655752   12790 addons.go:69] Setting storage-provisioner=true in profile "addons-643880"
	I1103 20:30:09.657606   12790 addons.go:231] Setting addon storage-provisioner=true in "addons-643880"
	I1103 20:30:09.657649   12790 host.go:66] Checking if "addons-643880" exists ...
	I1103 20:30:09.658076   12790 cli_runner.go:164] Run: docker container inspect addons-643880 --format={{.State.Status}}
	I1103 20:30:09.655779   12790 host.go:66] Checking if "addons-643880" exists ...
	I1103 20:30:09.661883   12790 cli_runner.go:164] Run: docker container inspect addons-643880 --format={{.State.Status}}
	I1103 20:30:09.655774   12790 addons.go:69] Setting gcp-auth=true in profile "addons-643880"
	I1103 20:30:09.662897   12790 mustload.go:65] Loading cluster: addons-643880
	I1103 20:30:09.663114   12790 config.go:182] Loaded profile config "addons-643880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1103 20:30:09.663361   12790 cli_runner.go:164] Run: docker container inspect addons-643880 --format={{.State.Status}}
	I1103 20:30:09.655792   12790 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-643880"
	I1103 20:30:09.664608   12790 host.go:66] Checking if "addons-643880" exists ...
	I1103 20:30:09.665140   12790 cli_runner.go:164] Run: docker container inspect addons-643880 --format={{.State.Status}}
	I1103 20:30:09.655733   12790 addons.go:231] Setting addon inspektor-gadget=true in "addons-643880"
	I1103 20:30:09.655786   12790 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-643880"
	I1103 20:30:09.672578   12790 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-643880"
	I1103 20:30:09.672943   12790 cli_runner.go:164] Run: docker container inspect addons-643880 --format={{.State.Status}}
	I1103 20:30:09.674639   12790 host.go:66] Checking if "addons-643880" exists ...
	I1103 20:30:09.680547   12790 cli_runner.go:164] Run: docker container inspect addons-643880 --format={{.State.Status}}
	I1103 20:30:09.687113   12790 addons.go:231] Setting addon default-storageclass=true in "addons-643880"
	I1103 20:30:09.687163   12790 host.go:66] Checking if "addons-643880" exists ...
	I1103 20:30:09.687666   12790 cli_runner.go:164] Run: docker container inspect addons-643880 --format={{.State.Status}}
	I1103 20:30:09.692044   12790 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1103 20:30:09.693480   12790 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1103 20:30:09.698138   12790 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1103 20:30:09.700388   12790 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.11
	I1103 20:30:09.703447   12790 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1103 20:30:09.703466   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1103 20:30:09.703518   12790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-643880
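Each installer goroutine needs the host port that Docker published for the node container's SSH daemon (22/tcp); the Go template in the inspect call above digs it out of NetworkSettings.Ports. Run standalone it looks like this (the result, 32772, matches the sshutil lines further down):

    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      addons-643880
    # prints the published host port, e.g. 32772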
	I1103 20:30:09.706594   12790 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1103 20:30:09.708328   12790 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1103 20:30:09.714329   12790 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1103 20:30:09.711294   12790 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1103 20:30:09.711313   12790 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1103 20:30:09.717578   12790 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1103 20:30:09.719104   12790 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1103 20:30:09.724193   12790 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1103 20:30:09.719178   12790 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1103 20:30:09.724214   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1103 20:30:09.724230   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1103 20:30:09.724277   12790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-643880
	I1103 20:30:09.724282   12790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-643880
	I1103 20:30:09.719076   12790 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1103 20:30:09.724412   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1103 20:30:09.719185   12790 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1103 20:30:09.726015   12790 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1103 20:30:09.726033   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1103 20:30:09.726079   12790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-643880
	I1103 20:30:09.724462   12790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-643880
	I1103 20:30:09.718973   12790 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1103 20:30:09.719193   12790 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1103 20:30:09.724673   12790 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-643880" context rescaled to 1 replicas
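On a single-node cluster minikube trims CoreDNS down to one replica, which is what the kapi.go:248 line above records. The equivalent manual operation would be roughly:

    kubectl --context addons-643880 -n kube-system \
      scale deployment coredns --replicas=1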
	I1103 20:30:09.727980   12790 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.2
	I1103 20:30:09.728008   12790 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1103 20:30:09.729087   12790 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-643880"
	I1103 20:30:09.729434   12790 host.go:66] Checking if "addons-643880" exists ...
	I1103 20:30:09.729961   12790 cli_runner.go:164] Run: docker container inspect addons-643880 --format={{.State.Status}}
	I1103 20:30:09.731603   12790 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1103 20:30:09.731621   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1103 20:30:09.731667   12790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-643880
	I1103 20:30:09.733493   12790 out.go:177] * Verifying Kubernetes components...
	I1103 20:30:09.729377   12790 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1103 20:30:09.730325   12790 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1103 20:30:09.729338   12790 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1103 20:30:09.734791   12790 host.go:66] Checking if "addons-643880" exists ...
	I1103 20:30:09.734900   12790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1103 20:30:09.734912   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1103 20:30:09.737685   12790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/addons-643880/id_rsa Username:docker}
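Every sshutil client in this section connects to the same endpoint: the node's SSH daemon published on 127.0.0.1:32772, authenticated as the docker user with the profile's id_rsa key. A manual session with the same parameters would be:

    ssh -p 32772 \
      -i /home/jenkins/minikube-integration/17545-5130/.minikube/machines/addons-643880/id_rsa \
      docker@127.0.0.1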
	I1103 20:30:09.741793   12790 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1103 20:30:09.741802   12790 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1103 20:30:09.747805   12790 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1103 20:30:09.747831   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1103 20:30:09.747882   12790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-643880
	I1103 20:30:09.759358   12790 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1103 20:30:09.743797   12790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-643880
	I1103 20:30:09.746680   12790 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1103 20:30:09.753089   12790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/addons-643880/id_rsa Username:docker}
	I1103 20:30:09.756140   12790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/addons-643880/id_rsa Username:docker}
	I1103 20:30:09.763083   12790 out.go:177]   - Using image docker.io/registry:2.8.3
	I1103 20:30:09.763350   12790 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1103 20:30:09.764876   12790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/addons-643880/id_rsa Username:docker}
	I1103 20:30:09.766810   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1103 20:30:09.768490   12790 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1103 20:30:09.768503   12790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-643880
	I1103 20:30:09.768874   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1103 20:30:09.768928   12790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-643880
	I1103 20:30:09.766823   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1103 20:30:09.766836   12790 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1103 20:30:09.770481   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1103 20:30:09.770537   12790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-643880
	I1103 20:30:09.770460   12790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-643880
	I1103 20:30:09.775692   12790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/addons-643880/id_rsa Username:docker}
	I1103 20:30:09.776960   12790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/addons-643880/id_rsa Username:docker}
	I1103 20:30:09.778859   12790 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1103 20:30:09.778010   12790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/addons-643880/id_rsa Username:docker}
	I1103 20:30:09.781063   12790 out.go:177]   - Using image docker.io/busybox:stable
	I1103 20:30:09.782789   12790 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1103 20:30:09.782806   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1103 20:30:09.782857   12790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-643880
	I1103 20:30:09.800515   12790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/addons-643880/id_rsa Username:docker}
	I1103 20:30:09.808403   12790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/addons-643880/id_rsa Username:docker}
	I1103 20:30:09.812091   12790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/addons-643880/id_rsa Username:docker}
	I1103 20:30:09.817290   12790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/addons-643880/id_rsa Username:docker}
	I1103 20:30:09.822438   12790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/addons-643880/id_rsa Username:docker}
	I1103 20:30:09.822673   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
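The pipeline above rewrites the coredns ConfigMap in place: it fetches the Corefile, uses sed to insert a hosts block before the "forward . /etc/resolv.conf" directive so that host.minikube.internal resolves to the host gateway (192.168.49.1), adds a "log" line before "errors", and feeds the result back through kubectl replace. After the edit the Corefile effectively contains:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }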
	I1103 20:30:09.823527   12790 node_ready.go:35] waiting up to 6m0s for node "addons-643880" to be "Ready" ...
	I1103 20:30:09.828770   12790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/addons-643880/id_rsa Username:docker}
	I1103 20:30:09.991702   12790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1103 20:30:09.997840   12790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
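Note that these applies run inside the node, not on the host: ssh_runner invokes the node-local kubectl binary under /var/lib/minikube/binaries/v1.28.3 with the node-local kubeconfig. Reproducing one by hand would look roughly like this (a sketch, assuming the standard minikube ssh plumbing):

    minikube -p addons-643880 ssh -- \
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.28.3/kubectl get pods -A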
	I1103 20:30:10.089791   12790 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1103 20:30:10.089818   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1103 20:30:10.191318   12790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1103 20:30:10.193324   12790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1103 20:30:10.199197   12790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1103 20:30:10.204143   12790 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1103 20:30:10.204167   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1103 20:30:10.292746   12790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1103 20:30:10.389819   12790 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1103 20:30:10.389898   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1103 20:30:10.394562   12790 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1103 20:30:10.394587   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1103 20:30:10.396810   12790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1103 20:30:10.403914   12790 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1103 20:30:10.403990   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1103 20:30:10.406396   12790 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1103 20:30:10.406422   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1103 20:30:10.490790   12790 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1103 20:30:10.490825   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1103 20:30:10.590631   12790 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1103 20:30:10.590671   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1103 20:30:10.596067   12790 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1103 20:30:10.596092   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1103 20:30:10.599535   12790 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1103 20:30:10.599556   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1103 20:30:10.698206   12790 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1103 20:30:10.698280   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1103 20:30:10.799784   12790 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1103 20:30:10.799862   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1103 20:30:10.807687   12790 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1103 20:30:10.807758   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1103 20:30:10.808522   12790 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1103 20:30:10.808545   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1103 20:30:10.909489   12790 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1103 20:30:10.909588   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1103 20:30:10.989886   12790 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1103 20:30:10.989966   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1103 20:30:10.994427   12790 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1103 20:30:10.994493   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1103 20:30:11.190211   12790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1103 20:30:11.191822   12790 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1103 20:30:11.191886   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1103 20:30:11.288986   12790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1103 20:30:11.301591   12790 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1103 20:30:11.301623   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1103 20:30:11.399719   12790 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1103 20:30:11.399749   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1103 20:30:11.407493   12790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1103 20:30:11.596207   12790 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1103 20:30:11.596237   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1103 20:30:11.806483   12790 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.983782305s)
	I1103 20:30:11.806608   12790 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1103 20:30:11.812127   12790 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1103 20:30:11.812154   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1103 20:30:11.907025   12790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1103 20:30:11.911754   12790 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1103 20:30:11.911780   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1103 20:30:11.999777   12790 node_ready.go:58] node "addons-643880" has status "Ready":"False"
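node_ready.go polls the node object until its Ready condition flips to True; the "Ready":"False" lines that recur below are those polls. The same wait, expressed with stock kubectl, is approximately:

    kubectl --context addons-643880 wait \
      --for=condition=Ready node/addons-643880 --timeout=6m0s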
	I1103 20:30:12.289252   12790 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1103 20:30:12.289354   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1103 20:30:12.301034   12790 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1103 20:30:12.301060   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1103 20:30:12.604733   12790 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1103 20:30:12.604758   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1103 20:30:12.706424   12790 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1103 20:30:12.706456   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1103 20:30:12.801874   12790 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1103 20:30:12.801906   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1103 20:30:12.907469   12790 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1103 20:30:12.907559   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1103 20:30:13.091587   12790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1103 20:30:13.303919   12790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1103 20:30:14.001073   12790 node_ready.go:58] node "addons-643880" has status "Ready":"False"
	I1103 20:30:14.704882   12790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.713136571s)
	I1103 20:30:14.705017   12790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.707147758s)
	I1103 20:30:14.705116   12790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.51371249s)
	I1103 20:30:15.923290   12790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.729904565s)
	I1103 20:30:15.923330   12790 addons.go:467] Verifying addon ingress=true in "addons-643880"
	I1103 20:30:15.923362   12790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.724131568s)
	I1103 20:30:15.925766   12790 out.go:177] * Verifying ingress addon...
	I1103 20:30:15.923452   12790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.630673235s)
	I1103 20:30:15.923477   12790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.526618183s)
	I1103 20:30:15.923503   12790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.733194733s)
	I1103 20:30:15.923529   12790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.634515278s)
	I1103 20:30:15.923610   12790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.516088541s)
	I1103 20:30:15.923694   12790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.016632365s)
	I1103 20:30:15.923767   12790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (2.832145068s)
	I1103 20:30:15.927166   12790 addons.go:467] Verifying addon registry=true in "addons-643880"
	W1103 20:30:15.927181   12790 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1103 20:30:15.927221   12790 retry.go:31] will retry after 137.381059ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
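This failure is a classic CRD ordering race: the VolumeSnapshotClass object is submitted in the same kubectl apply as the CRD that defines it, and the API server has not established the new type yet, hence 'no matches for kind "VolumeSnapshotClass"'. minikube simply retries (a 137ms backoff here, then an apply --force at 20:30:16 below, which succeeds once the CRDs exist). One common manual remedy is to apply the CRDs first and wait for them to be established before applying the custom resources, for example:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml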
	I1103 20:30:15.927237   12790 addons.go:467] Verifying addon metrics-server=true in "addons-643880"
	I1103 20:30:15.929613   12790 out.go:177] * Verifying registry addon...
	I1103 20:30:15.928003   12790 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1103 20:30:15.931928   12790 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W1103 20:30:15.933307   12790 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
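The "object has been modified" error in that warning is Kubernetes optimistic concurrency at work: the write carried a stale resourceVersion because something else updated the local-path StorageClass between read and write. Marking a class default is a one-line annotation patch and normally succeeds on retry; done by hand it would be roughly:

    kubectl --context addons-643880 patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'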
	I1103 20:30:15.934659   12790 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1103 20:30:15.934680   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:15.934703   12790 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1103 20:30:15.934717   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:15.990131   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:15.990354   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:16.065758   12790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1103 20:30:16.400726   12790 node_ready.go:58] node "addons-643880" has status "Ready":"False"
	I1103 20:30:16.494045   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:16.494344   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:16.547935   12790 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1103 20:30:16.548003   12790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-643880
	I1103 20:30:16.565789   12790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/addons-643880/id_rsa Username:docker}
	I1103 20:30:16.809802   12790 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1103 20:30:16.817085   12790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.51306154s)
	I1103 20:30:16.817123   12790 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-643880"
	I1103 20:30:16.819796   12790 out.go:177] * Verifying csi-hostpath-driver addon...
	I1103 20:30:16.822065   12790 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1103 20:30:16.825394   12790 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1103 20:30:16.825411   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
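The kapi.go:96 polling lines that dominate the rest of this log all follow the same pattern: list pods by label selector and loop until each reaches Running/Ready. In kubectl terms, the csi-hostpath-driver wait above is roughly:

    kubectl --context addons-643880 -n kube-system wait \
      --for=condition=Ready pod \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=6m0s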
	I1103 20:30:16.828140   12790 addons.go:231] Setting addon gcp-auth=true in "addons-643880"
	I1103 20:30:16.828183   12790 host.go:66] Checking if "addons-643880" exists ...
	I1103 20:30:16.828333   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:16.828538   12790 cli_runner.go:164] Run: docker container inspect addons-643880 --format={{.State.Status}}
	I1103 20:30:16.843575   12790 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1103 20:30:16.843624   12790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-643880
	I1103 20:30:16.858947   12790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/addons-643880/id_rsa Username:docker}
	I1103 20:30:16.993932   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:16.994114   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:17.199555   12790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.133750674s)
	I1103 20:30:17.201333   12790 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1103 20:30:17.202965   12790 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1103 20:30:17.204316   12790 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1103 20:30:17.204333   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1103 20:30:17.220337   12790 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1103 20:30:17.220355   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1103 20:30:17.235938   12790 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1103 20:30:17.235953   12790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1103 20:30:17.250569   12790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1103 20:30:17.333163   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:17.495510   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:17.496510   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:17.690836   12790 addons.go:467] Verifying addon gcp-auth=true in "addons-643880"
	I1103 20:30:17.692582   12790 out.go:177] * Verifying gcp-auth addon...
	I1103 20:30:17.695540   12790 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1103 20:30:17.698278   12790 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1103 20:30:17.698295   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:17.701747   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:17.892936   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:17.995142   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:17.996457   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:18.206239   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:18.391501   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:18.402227   12790 node_ready.go:58] node "addons-643880" has status "Ready":"False"
	I1103 20:30:18.494898   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:18.495147   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:18.704907   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:18.892692   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:18.994895   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:18.995207   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:19.205158   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:19.389774   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:19.494965   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:19.495175   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:19.705692   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:19.892246   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:19.994667   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:19.994826   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:20.204756   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:20.332496   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:20.494109   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:20.494308   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:20.705007   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:20.832873   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:20.901052   12790 node_ready.go:58] node "addons-643880" has status "Ready":"False"
	I1103 20:30:20.994407   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:20.995067   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:21.205404   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:21.333416   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:21.494706   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:21.495339   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:21.705428   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:21.832972   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:21.995351   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:21.995555   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:22.205146   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:22.332921   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:22.496167   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:22.496553   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:22.704835   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:22.832229   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:22.903703   12790 node_ready.go:58] node "addons-643880" has status "Ready":"False"
	I1103 20:30:22.994050   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:22.994191   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:23.204984   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:23.333059   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:23.495000   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:23.495039   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:23.704842   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:23.832194   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:23.993860   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:23.993883   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:24.204542   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:24.333262   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:24.493895   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:24.494033   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:24.704756   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:24.832366   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:24.994123   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:24.994281   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:25.204863   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:25.332240   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:25.401533   12790 node_ready.go:58] node "addons-643880" has status "Ready":"False"
	I1103 20:30:25.493810   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:25.494012   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:25.704791   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:25.831953   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:25.994449   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:25.994720   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:26.205206   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:26.332621   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:26.494321   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:26.494484   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:26.705282   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:26.832790   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:26.994583   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:26.994776   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:27.205457   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:27.332064   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:27.493651   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:27.493880   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:27.704958   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:27.832489   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:27.901994   12790 node_ready.go:58] node "addons-643880" has status "Ready":"False"
	I1103 20:30:27.994130   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:27.994569   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:28.204759   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:28.332338   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:28.493591   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:28.493860   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:28.705572   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:28.833326   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:28.993784   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:28.993951   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:29.205458   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:29.332799   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:29.494339   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:29.494493   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:29.705405   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:29.832841   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:29.995765   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:29.995955   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:30.205407   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:30.332733   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:30.401014   12790 node_ready.go:58] node "addons-643880" has status "Ready":"False"
	I1103 20:30:30.494216   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:30.494511   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:30.705038   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:30.832558   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:30.993769   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:30.993972   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:31.205346   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:31.332631   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:31.494007   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:31.494232   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:31.704710   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:31.832127   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:31.994161   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:31.994540   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:32.204667   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:32.332067   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:32.401369   12790 node_ready.go:58] node "addons-643880" has status "Ready":"False"
	I1103 20:30:32.493471   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:32.494007   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:32.705346   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:32.832791   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:32.994446   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:32.994692   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:33.205295   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:33.332374   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:33.493666   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:33.493868   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:33.705570   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:33.831847   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:33.994340   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:33.994624   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:34.205120   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:34.332390   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:34.401830   12790 node_ready.go:58] node "addons-643880" has status "Ready":"False"
	I1103 20:30:34.494101   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:34.494291   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:34.705184   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:34.832540   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:34.993975   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:34.994211   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:35.204723   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:35.331704   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:35.494378   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:35.494624   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:35.705053   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:35.832116   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:35.994412   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:35.994627   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:36.205105   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:36.332272   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:36.493889   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:36.494046   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:36.706300   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:36.832674   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:36.900993   12790 node_ready.go:58] node "addons-643880" has status "Ready":"False"
	I1103 20:30:36.994564   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:36.994568   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:37.205397   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:37.331738   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:37.494364   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:37.494502   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:37.705185   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:37.832910   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:37.994614   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:37.994843   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:38.205052   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:38.332648   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:38.493709   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:38.494018   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:38.705246   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:38.832680   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:38.902081   12790 node_ready.go:58] node "addons-643880" has status "Ready":"False"
	I1103 20:30:38.994400   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:38.994596   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:39.204685   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:39.331973   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:39.497486   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:39.497782   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:39.705403   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:39.832299   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:39.994115   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:39.994321   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:40.205045   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:40.332294   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:40.493609   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:40.493899   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:40.705337   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:40.832666   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:40.994289   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:40.994525   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:41.204650   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:41.331797   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:41.401294   12790 node_ready.go:58] node "addons-643880" has status "Ready":"False"
	I1103 20:30:41.494206   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:41.494525   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:41.705391   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:41.832730   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:41.993961   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:41.994254   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:42.204595   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:42.332259   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:42.493453   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:42.493619   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:42.705456   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:42.832566   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:42.993614   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:42.993845   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:43.205166   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:43.332511   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:43.401737   12790 node_ready.go:58] node "addons-643880" has status "Ready":"False"
	I1103 20:30:43.493771   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:43.493969   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:43.704325   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:43.832509   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:43.994030   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:43.994179   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:44.204815   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:44.332121   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:44.493516   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:44.493877   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:44.705285   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:44.835012   12790 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1103 20:30:44.835037   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:44.901459   12790 node_ready.go:49] node "addons-643880" has status "Ready":"True"
	I1103 20:30:44.901488   12790 node_ready.go:38] duration metric: took 35.077925223s waiting for node "addons-643880" to be "Ready" ...
	I1103 20:30:44.901499   12790 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I1103 20:30:44.911063   12790 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-s7nc7" in "kube-system" namespace to be "Ready" ...
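
The node_ready.go and pod_ready.go lines above show minikube polling the node's Ready condition at roughly two-second intervals before moving on to per-pod checks. As a rough sketch only, not minikube's actual implementation, a comparable node-readiness poll can be written with client-go; the kubeconfig path and the two-second interval here are assumptions:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition until it is True or the
// timeout elapses, similar in spirit to the node_ready.go messages above.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // assumed poll interval, matching the log cadence
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(cs, "addons-643880", 6*time.Minute))
}
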
	I1103 20:30:44.995063   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:44.995882   12790 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1103 20:30:44.995989   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:45.204953   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:45.335854   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:45.498068   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:45.498067   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:45.704910   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:45.893880   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:45.998082   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:45.998772   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:46.205426   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:46.334414   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:46.494320   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:46.494387   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:46.705975   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:46.834222   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:46.928090   12790 pod_ready.go:102] pod "coredns-5dd5756b68-s7nc7" in "kube-system" namespace has status "Ready":"False"
	I1103 20:30:46.994654   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:46.994844   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:47.205611   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:47.333458   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:47.426707   12790 pod_ready.go:92] pod "coredns-5dd5756b68-s7nc7" in "kube-system" namespace has status "Ready":"True"
	I1103 20:30:47.426728   12790 pod_ready.go:81] duration metric: took 2.515636637s waiting for pod "coredns-5dd5756b68-s7nc7" in "kube-system" namespace to be "Ready" ...
	I1103 20:30:47.426746   12790 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-643880" in "kube-system" namespace to be "Ready" ...
	I1103 20:30:47.430641   12790 pod_ready.go:92] pod "etcd-addons-643880" in "kube-system" namespace has status "Ready":"True"
	I1103 20:30:47.430658   12790 pod_ready.go:81] duration metric: took 3.906511ms waiting for pod "etcd-addons-643880" in "kube-system" namespace to be "Ready" ...
	I1103 20:30:47.430670   12790 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-643880" in "kube-system" namespace to be "Ready" ...
	I1103 20:30:47.434424   12790 pod_ready.go:92] pod "kube-apiserver-addons-643880" in "kube-system" namespace has status "Ready":"True"
	I1103 20:30:47.434440   12790 pod_ready.go:81] duration metric: took 3.76315ms waiting for pod "kube-apiserver-addons-643880" in "kube-system" namespace to be "Ready" ...
	I1103 20:30:47.434449   12790 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-643880" in "kube-system" namespace to be "Ready" ...
	I1103 20:30:47.438167   12790 pod_ready.go:92] pod "kube-controller-manager-addons-643880" in "kube-system" namespace has status "Ready":"True"
	I1103 20:30:47.438184   12790 pod_ready.go:81] duration metric: took 3.729089ms waiting for pod "kube-controller-manager-addons-643880" in "kube-system" namespace to be "Ready" ...
	I1103 20:30:47.438193   12790 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-52t4q" in "kube-system" namespace to be "Ready" ...
	I1103 20:30:47.494932   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:47.494966   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:47.701771   12790 pod_ready.go:92] pod "kube-proxy-52t4q" in "kube-system" namespace has status "Ready":"True"
	I1103 20:30:47.701791   12790 pod_ready.go:81] duration metric: took 263.592237ms waiting for pod "kube-proxy-52t4q" in "kube-system" namespace to be "Ready" ...
	I1103 20:30:47.701800   12790 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-643880" in "kube-system" namespace to be "Ready" ...
	I1103 20:30:47.704318   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:47.836291   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:47.994938   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:47.995016   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:48.102528   12790 pod_ready.go:92] pod "kube-scheduler-addons-643880" in "kube-system" namespace has status "Ready":"True"
	I1103 20:30:48.102551   12790 pod_ready.go:81] duration metric: took 400.745141ms waiting for pod "kube-scheduler-addons-643880" in "kube-system" namespace to be "Ready" ...
	I1103 20:30:48.102560   12790 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-n4gbx" in "kube-system" namespace to be "Ready" ...
	I1103 20:30:48.205518   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:48.333242   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:48.494420   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:48.494513   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:48.704231   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:48.833459   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:48.995092   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:48.995265   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:49.204382   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:49.333243   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:49.493536   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:49.494249   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:49.705189   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:49.891090   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:49.995253   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:49.995737   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:50.205148   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:50.333667   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:50.408066   12790 pod_ready.go:92] pod "metrics-server-7c66d45ddc-n4gbx" in "kube-system" namespace has status "Ready":"True"
	I1103 20:30:50.408163   12790 pod_ready.go:81] duration metric: took 2.305588471s waiting for pod "metrics-server-7c66d45ddc-n4gbx" in "kube-system" namespace to be "Ready" ...
	I1103 20:30:50.408193   12790 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-ss2kh" in "kube-system" namespace to be "Ready" ...
	I1103 20:30:50.494734   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:50.494947   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:50.704983   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:50.833679   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:50.994882   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:50.995120   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:51.205446   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:51.333780   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:51.495057   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:51.495817   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:51.705754   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:51.894544   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:51.996287   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:51.996296   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:52.205686   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:52.333303   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:52.495388   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:52.495508   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:52.705529   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:52.835998   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:52.908442   12790 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ss2kh" in "kube-system" namespace has status "Ready":"False"
	I1103 20:30:52.995428   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:52.996159   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:53.205594   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:53.333883   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:53.494159   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:53.494316   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:53.705054   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:53.833431   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:54.003487   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:54.004197   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:54.204664   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:54.333623   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:54.494438   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:54.494827   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:54.705389   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:54.833675   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:54.993848   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:54.993990   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:55.205224   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:55.333294   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:55.407001   12790 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ss2kh" in "kube-system" namespace has status "Ready":"False"
	I1103 20:30:55.494809   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:55.494895   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:55.704995   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:55.833506   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:55.994614   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:55.994735   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:56.204660   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:56.335843   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:56.493918   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:56.494185   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:56.704727   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:56.892102   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:56.995883   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:56.996679   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:57.205912   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:57.392214   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:57.408179   12790 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ss2kh" in "kube-system" namespace has status "Ready":"False"
	I1103 20:30:57.495393   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:57.495696   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:57.705906   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:57.894354   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:57.994580   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:57.994770   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:58.204807   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:58.333790   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:58.495147   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:58.495301   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:58.705521   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:58.835119   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:58.995670   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:58.995682   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:59.205168   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:59.334430   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:59.408241   12790 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ss2kh" in "kube-system" namespace has status "Ready":"False"
	I1103 20:30:59.495055   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:30:59.495073   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:59.705309   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:30:59.834385   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:30:59.995397   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:30:59.995537   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:00.205175   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:00.334804   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:00.494190   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:00.494769   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:00.705067   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:00.833810   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:00.994623   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:00.994972   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:01.204744   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:01.348775   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:01.494062   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:01.494307   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:01.705152   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:01.833266   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:01.907566   12790 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ss2kh" in "kube-system" namespace has status "Ready":"False"
	I1103 20:31:01.994041   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:01.994231   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:02.205034   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:02.334906   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:02.494643   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:02.494820   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:02.704915   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:02.833077   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:02.994433   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:02.994497   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:03.205244   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:03.334061   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:03.494600   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:03.494731   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:03.705471   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:03.834285   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:03.995157   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:03.995237   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:04.204597   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:04.333673   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:04.407525   12790 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ss2kh" in "kube-system" namespace has status "Ready":"False"
	I1103 20:31:04.494270   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:04.494433   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:04.705273   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:04.834235   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:04.994364   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:04.994474   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:05.205044   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:05.334242   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:05.494304   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:05.494823   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:05.706811   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:05.834381   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:05.995266   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:05.995586   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:06.205222   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:06.333079   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:06.494293   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:06.494294   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:06.705042   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:06.833477   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:06.907618   12790 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ss2kh" in "kube-system" namespace has status "Ready":"False"
	I1103 20:31:06.994274   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:06.994482   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:07.204967   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:07.333714   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:07.495718   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:07.496053   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:07.705447   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:07.898939   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:07.998500   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:07.999452   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:08.213663   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:08.392328   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:08.496327   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:08.496824   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:08.706730   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:08.834501   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:08.908544   12790 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ss2kh" in "kube-system" namespace has status "Ready":"False"
	I1103 20:31:08.995328   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:08.996056   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:09.205311   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:09.394061   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:09.495467   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:09.495505   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:09.706197   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:09.834519   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:09.995215   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:09.995721   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:10.205616   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:10.333895   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:10.494956   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:10.495272   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:10.704810   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:10.833870   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:10.908709   12790 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ss2kh" in "kube-system" namespace has status "Ready":"False"
	I1103 20:31:10.996898   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:10.996979   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:11.206153   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:11.334392   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:11.495079   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:11.495104   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:11.704868   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:11.833593   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:11.995262   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:11.995528   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:12.205254   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:12.334541   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:12.495684   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:12.495769   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:12.705419   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:12.835172   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:12.994267   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:12.994539   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:13.204789   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:13.333923   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:13.407200   12790 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-ss2kh" in "kube-system" namespace has status "Ready":"True"
	I1103 20:31:13.407219   12790 pod_ready.go:81] duration metric: took 22.998993179s waiting for pod "nvidia-device-plugin-daemonset-ss2kh" in "kube-system" namespace to be "Ready" ...
	I1103 20:31:13.407236   12790 pod_ready.go:38] duration metric: took 28.505723706s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
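
Throughout this log, the kapi.go:96 lines are label-selector waits: each addon is polled by listing the pods matching its selector, and the state stays "Pending: [<nil>]" until pods exist and all report Ready. A minimal illustrative version of such a check follows; it is not minikube's kapi code, and the kube-system namespace used for the registry selector is an assumption:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allPodsReady lists pods matching the selector and reports true only when
// at least one pod exists and every pod's PodReady condition is True.
func allPodsReady(cs kubernetes.Interface, ns, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
		metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	if len(pods.Items) == 0 {
		return false, nil // nothing scheduled yet: the wait keeps reporting Pending
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
				break
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ok, err := allPodsReady(cs, "kube-system", "kubernetes.io/minikube-addons=registry")
	fmt.Println(ok, err)
}
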
	I1103 20:31:13.407248   12790 api_server.go:52] waiting for apiserver process to appear ...
	I1103 20:31:13.407269   12790 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1103 20:31:13.407313   12790 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1103 20:31:13.442732   12790 cri.go:89] found id: "770dfa23f338e302d0146da259ee43faa7575ca773f27055b05d04ee8778815e"
	I1103 20:31:13.442759   12790 cri.go:89] found id: ""
	I1103 20:31:13.442770   12790 logs.go:284] 1 containers: [770dfa23f338e302d0146da259ee43faa7575ca773f27055b05d04ee8778815e]
	I1103 20:31:13.442823   12790 ssh_runner.go:195] Run: which crictl
	I1103 20:31:13.446047   12790 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1103 20:31:13.446109   12790 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1103 20:31:13.495378   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:13.495690   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:13.504997   12790 cri.go:89] found id: "dc7d3a5edde0d5a64f7113337ae24a8e5d7253695ce00a087712c9bb72fe18d6"
	I1103 20:31:13.505018   12790 cri.go:89] found id: ""
	I1103 20:31:13.505027   12790 logs.go:284] 1 containers: [dc7d3a5edde0d5a64f7113337ae24a8e5d7253695ce00a087712c9bb72fe18d6]
	I1103 20:31:13.505075   12790 ssh_runner.go:195] Run: which crictl
	I1103 20:31:13.508280   12790 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1103 20:31:13.508362   12790 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1103 20:31:13.545056   12790 cri.go:89] found id: "ca0515c04c7b38b82f9c49e411c1dd482af6cac8cf988716dd77f42556b71c08"
	I1103 20:31:13.545082   12790 cri.go:89] found id: ""
	I1103 20:31:13.545092   12790 logs.go:284] 1 containers: [ca0515c04c7b38b82f9c49e411c1dd482af6cac8cf988716dd77f42556b71c08]
	I1103 20:31:13.545138   12790 ssh_runner.go:195] Run: which crictl
	I1103 20:31:13.589221   12790 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1103 20:31:13.589297   12790 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1103 20:31:13.625377   12790 cri.go:89] found id: "b7bcbada6016174319f22faefa846895f23a5a8be2f5bca1c987869629237049"
	I1103 20:31:13.625401   12790 cri.go:89] found id: ""
	I1103 20:31:13.625411   12790 logs.go:284] 1 containers: [b7bcbada6016174319f22faefa846895f23a5a8be2f5bca1c987869629237049]
	I1103 20:31:13.625465   12790 ssh_runner.go:195] Run: which crictl
	I1103 20:31:13.628688   12790 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1103 20:31:13.628749   12790 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1103 20:31:13.699306   12790 cri.go:89] found id: "4448697ed5490be30ce62f083e87a9978d2f3bfa550ba8e4c16dfb4c0eee8bc9"
	I1103 20:31:13.699331   12790 cri.go:89] found id: ""
	I1103 20:31:13.699339   12790 logs.go:284] 1 containers: [4448697ed5490be30ce62f083e87a9978d2f3bfa550ba8e4c16dfb4c0eee8bc9]
	I1103 20:31:13.699377   12790 ssh_runner.go:195] Run: which crictl
	I1103 20:31:13.702716   12790 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1103 20:31:13.702768   12790 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1103 20:31:13.705363   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:13.736772   12790 cri.go:89] found id: "2e128682b37610d74ff04e0903f0dbf201db63c97e05787cf4fed2284af2f6d0"
	I1103 20:31:13.736797   12790 cri.go:89] found id: ""
	I1103 20:31:13.736807   12790 logs.go:284] 1 containers: [2e128682b37610d74ff04e0903f0dbf201db63c97e05787cf4fed2284af2f6d0]
	I1103 20:31:13.736852   12790 ssh_runner.go:195] Run: which crictl
	I1103 20:31:13.740346   12790 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1103 20:31:13.740394   12790 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1103 20:31:13.794434   12790 cri.go:89] found id: "8f3e1974c6920b94a097c9c33ae5a280074a77f0559fcac89fd2fedfa262b5d6"
	I1103 20:31:13.794458   12790 cri.go:89] found id: ""
	I1103 20:31:13.794470   12790 logs.go:284] 1 containers: [8f3e1974c6920b94a097c9c33ae5a280074a77f0559fcac89fd2fedfa262b5d6]
	I1103 20:31:13.794519   12790 ssh_runner.go:195] Run: which crictl
	I1103 20:31:13.797652   12790 logs.go:123] Gathering logs for kube-proxy [4448697ed5490be30ce62f083e87a9978d2f3bfa550ba8e4c16dfb4c0eee8bc9] ...
	I1103 20:31:13.797673   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4448697ed5490be30ce62f083e87a9978d2f3bfa550ba8e4c16dfb4c0eee8bc9"
	I1103 20:31:13.833618   12790 logs.go:123] Gathering logs for kube-controller-manager [2e128682b37610d74ff04e0903f0dbf201db63c97e05787cf4fed2284af2f6d0] ...
	I1103 20:31:13.833649   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e128682b37610d74ff04e0903f0dbf201db63c97e05787cf4fed2284af2f6d0"
	I1103 20:31:13.834038   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:13.925259   12790 logs.go:123] Gathering logs for CRI-O ...
	I1103 20:31:13.925288   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1103 20:31:13.994493   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:13.994653   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:13.999021   12790 logs.go:123] Gathering logs for kubelet ...
	I1103 20:31:13.999043   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1103 20:31:14.072169   12790 logs.go:123] Gathering logs for describe nodes ...
	I1103 20:31:14.072198   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1103 20:31:14.172754   12790 logs.go:123] Gathering logs for kube-scheduler [b7bcbada6016174319f22faefa846895f23a5a8be2f5bca1c987869629237049] ...
	I1103 20:31:14.172784   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7bcbada6016174319f22faefa846895f23a5a8be2f5bca1c987869629237049"
	I1103 20:31:14.204963   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:14.230622   12790 logs.go:123] Gathering logs for coredns [ca0515c04c7b38b82f9c49e411c1dd482af6cac8cf988716dd77f42556b71c08] ...
	I1103 20:31:14.230654   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca0515c04c7b38b82f9c49e411c1dd482af6cac8cf988716dd77f42556b71c08"
	I1103 20:31:14.262948   12790 logs.go:123] Gathering logs for kindnet [8f3e1974c6920b94a097c9c33ae5a280074a77f0559fcac89fd2fedfa262b5d6] ...
	I1103 20:31:14.262978   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f3e1974c6920b94a097c9c33ae5a280074a77f0559fcac89fd2fedfa262b5d6"
	I1103 20:31:14.293391   12790 logs.go:123] Gathering logs for container status ...
	I1103 20:31:14.293415   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1103 20:31:14.329690   12790 logs.go:123] Gathering logs for dmesg ...
	I1103 20:31:14.329722   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1103 20:31:14.335048   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:14.341336   12790 logs.go:123] Gathering logs for kube-apiserver [770dfa23f338e302d0146da259ee43faa7575ca773f27055b05d04ee8778815e] ...
	I1103 20:31:14.341363   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 770dfa23f338e302d0146da259ee43faa7575ca773f27055b05d04ee8778815e"
	I1103 20:31:14.393169   12790 logs.go:123] Gathering logs for etcd [dc7d3a5edde0d5a64f7113337ae24a8e5d7253695ce00a087712c9bb72fe18d6] ...
	I1103 20:31:14.393227   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc7d3a5edde0d5a64f7113337ae24a8e5d7253695ce00a087712c9bb72fe18d6"
	I1103 20:31:14.494186   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:14.494407   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:14.705093   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:14.833748   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:14.995376   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:14.995426   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:15.206422   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:15.333527   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:15.494770   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:15.494790   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:15.706228   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:15.834088   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:15.995357   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:15.995637   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:16.205667   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:16.333433   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:16.495108   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:16.495405   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:16.705933   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:16.834539   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:16.955500   12790 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1103 20:31:16.994835   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:16.994893   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:17.001747   12790 api_server.go:72] duration metric: took 1m7.271560659s to wait for apiserver process to appear ...
	I1103 20:31:17.001770   12790 api_server.go:88] waiting for apiserver healthz status ...
	I1103 20:31:17.001800   12790 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1103 20:31:17.001855   12790 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1103 20:31:17.040705   12790 cri.go:89] found id: "770dfa23f338e302d0146da259ee43faa7575ca773f27055b05d04ee8778815e"
	I1103 20:31:17.040732   12790 cri.go:89] found id: ""
	I1103 20:31:17.040741   12790 logs.go:284] 1 containers: [770dfa23f338e302d0146da259ee43faa7575ca773f27055b05d04ee8778815e]
	I1103 20:31:17.040791   12790 ssh_runner.go:195] Run: which crictl
	I1103 20:31:17.043901   12790 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1103 20:31:17.043963   12790 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1103 20:31:17.125315   12790 cri.go:89] found id: "dc7d3a5edde0d5a64f7113337ae24a8e5d7253695ce00a087712c9bb72fe18d6"
	I1103 20:31:17.125343   12790 cri.go:89] found id: ""
	I1103 20:31:17.125353   12790 logs.go:284] 1 containers: [dc7d3a5edde0d5a64f7113337ae24a8e5d7253695ce00a087712c9bb72fe18d6]
	I1103 20:31:17.125406   12790 ssh_runner.go:195] Run: which crictl
	I1103 20:31:17.129083   12790 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1103 20:31:17.129138   12790 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1103 20:31:17.209226   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:17.300591   12790 cri.go:89] found id: "ca0515c04c7b38b82f9c49e411c1dd482af6cac8cf988716dd77f42556b71c08"
	I1103 20:31:17.300618   12790 cri.go:89] found id: ""
	I1103 20:31:17.300628   12790 logs.go:284] 1 containers: [ca0515c04c7b38b82f9c49e411c1dd482af6cac8cf988716dd77f42556b71c08]
	I1103 20:31:17.300677   12790 ssh_runner.go:195] Run: which crictl
	I1103 20:31:17.304499   12790 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1103 20:31:17.304562   12790 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1103 20:31:17.333128   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:17.396910   12790 cri.go:89] found id: "b7bcbada6016174319f22faefa846895f23a5a8be2f5bca1c987869629237049"
	I1103 20:31:17.396938   12790 cri.go:89] found id: ""
	I1103 20:31:17.396946   12790 logs.go:284] 1 containers: [b7bcbada6016174319f22faefa846895f23a5a8be2f5bca1c987869629237049]
	I1103 20:31:17.396995   12790 ssh_runner.go:195] Run: which crictl
	I1103 20:31:17.400342   12790 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1103 20:31:17.400399   12790 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1103 20:31:17.436083   12790 cri.go:89] found id: "4448697ed5490be30ce62f083e87a9978d2f3bfa550ba8e4c16dfb4c0eee8bc9"
	I1103 20:31:17.436102   12790 cri.go:89] found id: ""
	I1103 20:31:17.436109   12790 logs.go:284] 1 containers: [4448697ed5490be30ce62f083e87a9978d2f3bfa550ba8e4c16dfb4c0eee8bc9]
	I1103 20:31:17.436149   12790 ssh_runner.go:195] Run: which crictl
	I1103 20:31:17.439485   12790 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1103 20:31:17.439570   12790 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1103 20:31:17.495000   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:17.495225   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:17.518133   12790 cri.go:89] found id: "2e128682b37610d74ff04e0903f0dbf201db63c97e05787cf4fed2284af2f6d0"
	I1103 20:31:17.518151   12790 cri.go:89] found id: ""
	I1103 20:31:17.518161   12790 logs.go:284] 1 containers: [2e128682b37610d74ff04e0903f0dbf201db63c97e05787cf4fed2284af2f6d0]
	I1103 20:31:17.518208   12790 ssh_runner.go:195] Run: which crictl
	I1103 20:31:17.521855   12790 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1103 20:31:17.521930   12790 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1103 20:31:17.602573   12790 cri.go:89] found id: "8f3e1974c6920b94a097c9c33ae5a280074a77f0559fcac89fd2fedfa262b5d6"
	I1103 20:31:17.602602   12790 cri.go:89] found id: ""
	I1103 20:31:17.602612   12790 logs.go:284] 1 containers: [8f3e1974c6920b94a097c9c33ae5a280074a77f0559fcac89fd2fedfa262b5d6]
	I1103 20:31:17.602670   12790 ssh_runner.go:195] Run: which crictl
	I1103 20:31:17.606549   12790 logs.go:123] Gathering logs for dmesg ...
	I1103 20:31:17.606586   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1103 20:31:17.619376   12790 logs.go:123] Gathering logs for etcd [dc7d3a5edde0d5a64f7113337ae24a8e5d7253695ce00a087712c9bb72fe18d6] ...
	I1103 20:31:17.619402   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc7d3a5edde0d5a64f7113337ae24a8e5d7253695ce00a087712c9bb72fe18d6"
	I1103 20:31:17.705954   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:17.732683   12790 logs.go:123] Gathering logs for coredns [ca0515c04c7b38b82f9c49e411c1dd482af6cac8cf988716dd77f42556b71c08] ...
	I1103 20:31:17.732716   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca0515c04c7b38b82f9c49e411c1dd482af6cac8cf988716dd77f42556b71c08"
	I1103 20:31:17.798736   12790 logs.go:123] Gathering logs for kube-proxy [4448697ed5490be30ce62f083e87a9978d2f3bfa550ba8e4c16dfb4c0eee8bc9] ...
	I1103 20:31:17.798763   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4448697ed5490be30ce62f083e87a9978d2f3bfa550ba8e4c16dfb4c0eee8bc9"
	I1103 20:31:17.833895   12790 logs.go:123] Gathering logs for kindnet [8f3e1974c6920b94a097c9c33ae5a280074a77f0559fcac89fd2fedfa262b5d6] ...
	I1103 20:31:17.833918   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f3e1974c6920b94a097c9c33ae5a280074a77f0559fcac89fd2fedfa262b5d6"
	I1103 20:31:17.834245   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:17.904063   12790 logs.go:123] Gathering logs for CRI-O ...
	I1103 20:31:17.904087   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1103 20:31:17.981495   12790 logs.go:123] Gathering logs for container status ...
	I1103 20:31:17.981526   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1103 20:31:17.995046   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:17.996383   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:18.096392   12790 logs.go:123] Gathering logs for kubelet ...
	I1103 20:31:18.096490   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1103 20:31:18.185879   12790 logs.go:123] Gathering logs for describe nodes ...
	I1103 20:31:18.185907   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1103 20:31:18.207607   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:18.334188   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:18.407124   12790 logs.go:123] Gathering logs for kube-apiserver [770dfa23f338e302d0146da259ee43faa7575ca773f27055b05d04ee8778815e] ...
	I1103 20:31:18.407150   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 770dfa23f338e302d0146da259ee43faa7575ca773f27055b05d04ee8778815e"
	I1103 20:31:18.458779   12790 logs.go:123] Gathering logs for kube-scheduler [b7bcbada6016174319f22faefa846895f23a5a8be2f5bca1c987869629237049] ...
	I1103 20:31:18.458815   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7bcbada6016174319f22faefa846895f23a5a8be2f5bca1c987869629237049"
	I1103 20:31:18.495557   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:18.496338   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:18.532758   12790 logs.go:123] Gathering logs for kube-controller-manager [2e128682b37610d74ff04e0903f0dbf201db63c97e05787cf4fed2284af2f6d0] ...
	I1103 20:31:18.532791   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e128682b37610d74ff04e0903f0dbf201db63c97e05787cf4fed2284af2f6d0"
	I1103 20:31:18.705162   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:18.834196   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:18.995551   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:18.995606   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:19.205794   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:19.333497   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:19.493906   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:19.493961   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:19.704696   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:19.834551   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:19.995593   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:19.995746   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1103 20:31:20.205813   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:20.333479   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:20.494900   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:20.495641   12790 kapi.go:107] duration metric: took 1m4.563710385s to wait for kubernetes.io/minikube-addons=registry ...
	I1103 20:31:20.704965   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:20.833953   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:20.994621   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:21.129028   12790 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1103 20:31:21.133980   12790 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1103 20:31:21.135079   12790 api_server.go:141] control plane version: v1.28.3
	I1103 20:31:21.135100   12790 api_server.go:131] duration metric: took 4.133323259s to wait for apiserver health ...
	I1103 20:31:21.135107   12790 system_pods.go:43] waiting for kube-system pods to appear ...
	I1103 20:31:21.135128   12790 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1103 20:31:21.135180   12790 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1103 20:31:21.204241   12790 cri.go:89] found id: "770dfa23f338e302d0146da259ee43faa7575ca773f27055b05d04ee8778815e"
	I1103 20:31:21.204263   12790 cri.go:89] found id: ""
	I1103 20:31:21.204272   12790 logs.go:284] 1 containers: [770dfa23f338e302d0146da259ee43faa7575ca773f27055b05d04ee8778815e]
	I1103 20:31:21.204327   12790 ssh_runner.go:195] Run: which crictl
	I1103 20:31:21.205131   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:21.207820   12790 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1103 20:31:21.207883   12790 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1103 20:31:21.291008   12790 cri.go:89] found id: "dc7d3a5edde0d5a64f7113337ae24a8e5d7253695ce00a087712c9bb72fe18d6"
	I1103 20:31:21.291036   12790 cri.go:89] found id: ""
	I1103 20:31:21.291046   12790 logs.go:284] 1 containers: [dc7d3a5edde0d5a64f7113337ae24a8e5d7253695ce00a087712c9bb72fe18d6]
	I1103 20:31:21.291087   12790 ssh_runner.go:195] Run: which crictl
	I1103 20:31:21.294304   12790 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1103 20:31:21.294368   12790 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1103 20:31:21.329313   12790 cri.go:89] found id: "ca0515c04c7b38b82f9c49e411c1dd482af6cac8cf988716dd77f42556b71c08"
	I1103 20:31:21.329338   12790 cri.go:89] found id: ""
	I1103 20:31:21.329351   12790 logs.go:284] 1 containers: [ca0515c04c7b38b82f9c49e411c1dd482af6cac8cf988716dd77f42556b71c08]
	I1103 20:31:21.329398   12790 ssh_runner.go:195] Run: which crictl
	I1103 20:31:21.333107   12790 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1103 20:31:21.333168   12790 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1103 20:31:21.334573   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:21.404303   12790 cri.go:89] found id: "b7bcbada6016174319f22faefa846895f23a5a8be2f5bca1c987869629237049"
	I1103 20:31:21.404328   12790 cri.go:89] found id: ""
	I1103 20:31:21.404336   12790 logs.go:284] 1 containers: [b7bcbada6016174319f22faefa846895f23a5a8be2f5bca1c987869629237049]
	I1103 20:31:21.404379   12790 ssh_runner.go:195] Run: which crictl
	I1103 20:31:21.407771   12790 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1103 20:31:21.407846   12790 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1103 20:31:21.440140   12790 cri.go:89] found id: "4448697ed5490be30ce62f083e87a9978d2f3bfa550ba8e4c16dfb4c0eee8bc9"
	I1103 20:31:21.440165   12790 cri.go:89] found id: ""
	I1103 20:31:21.440176   12790 logs.go:284] 1 containers: [4448697ed5490be30ce62f083e87a9978d2f3bfa550ba8e4c16dfb4c0eee8bc9]
	I1103 20:31:21.440224   12790 ssh_runner.go:195] Run: which crictl
	I1103 20:31:21.443681   12790 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1103 20:31:21.443741   12790 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1103 20:31:21.494911   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:21.517942   12790 cri.go:89] found id: "2e128682b37610d74ff04e0903f0dbf201db63c97e05787cf4fed2284af2f6d0"
	I1103 20:31:21.517963   12790 cri.go:89] found id: ""
	I1103 20:31:21.517971   12790 logs.go:284] 1 containers: [2e128682b37610d74ff04e0903f0dbf201db63c97e05787cf4fed2284af2f6d0]
	I1103 20:31:21.518026   12790 ssh_runner.go:195] Run: which crictl
	I1103 20:31:21.521945   12790 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1103 20:31:21.522005   12790 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1103 20:31:21.605902   12790 cri.go:89] found id: "8f3e1974c6920b94a097c9c33ae5a280074a77f0559fcac89fd2fedfa262b5d6"
	I1103 20:31:21.605930   12790 cri.go:89] found id: ""
	I1103 20:31:21.605940   12790 logs.go:284] 1 containers: [8f3e1974c6920b94a097c9c33ae5a280074a77f0559fcac89fd2fedfa262b5d6]
	I1103 20:31:21.605992   12790 ssh_runner.go:195] Run: which crictl
	I1103 20:31:21.609290   12790 logs.go:123] Gathering logs for dmesg ...
	I1103 20:31:21.609319   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1103 20:31:21.622604   12790 logs.go:123] Gathering logs for describe nodes ...
	I1103 20:31:21.622637   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1103 20:31:21.706516   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:21.890437   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:21.922616   12790 logs.go:123] Gathering logs for kube-apiserver [770dfa23f338e302d0146da259ee43faa7575ca773f27055b05d04ee8778815e] ...
	I1103 20:31:21.922653   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 770dfa23f338e302d0146da259ee43faa7575ca773f27055b05d04ee8778815e"
	I1103 20:31:21.994600   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:22.018408   12790 logs.go:123] Gathering logs for kube-scheduler [b7bcbada6016174319f22faefa846895f23a5a8be2f5bca1c987869629237049] ...
	I1103 20:31:22.018436   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7bcbada6016174319f22faefa846895f23a5a8be2f5bca1c987869629237049"
	I1103 20:31:22.058339   12790 logs.go:123] Gathering logs for kube-controller-manager [2e128682b37610d74ff04e0903f0dbf201db63c97e05787cf4fed2284af2f6d0] ...
	I1103 20:31:22.058379   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e128682b37610d74ff04e0903f0dbf201db63c97e05787cf4fed2284af2f6d0"
	I1103 20:31:22.150641   12790 logs.go:123] Gathering logs for kindnet [8f3e1974c6920b94a097c9c33ae5a280074a77f0559fcac89fd2fedfa262b5d6] ...
	I1103 20:31:22.150670   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f3e1974c6920b94a097c9c33ae5a280074a77f0559fcac89fd2fedfa262b5d6"
	I1103 20:31:22.201692   12790 logs.go:123] Gathering logs for CRI-O ...
	I1103 20:31:22.201728   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1103 20:31:22.207568   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:22.281685   12790 logs.go:123] Gathering logs for kubelet ...
	I1103 20:31:22.281715   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1103 20:31:22.333926   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:22.379369   12790 logs.go:123] Gathering logs for container status ...
	I1103 20:31:22.379408   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1103 20:31:22.429633   12790 logs.go:123] Gathering logs for coredns [ca0515c04c7b38b82f9c49e411c1dd482af6cac8cf988716dd77f42556b71c08] ...
	I1103 20:31:22.429662   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca0515c04c7b38b82f9c49e411c1dd482af6cac8cf988716dd77f42556b71c08"
	I1103 20:31:22.463237   12790 logs.go:123] Gathering logs for kube-proxy [4448697ed5490be30ce62f083e87a9978d2f3bfa550ba8e4c16dfb4c0eee8bc9] ...
	I1103 20:31:22.463266   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4448697ed5490be30ce62f083e87a9978d2f3bfa550ba8e4c16dfb4c0eee8bc9"
	I1103 20:31:22.495278   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:22.496564   12790 logs.go:123] Gathering logs for etcd [dc7d3a5edde0d5a64f7113337ae24a8e5d7253695ce00a087712c9bb72fe18d6] ...
	I1103 20:31:22.496585   12790 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc7d3a5edde0d5a64f7113337ae24a8e5d7253695ce00a087712c9bb72fe18d6"
	I1103 20:31:22.705848   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:22.840007   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:22.995845   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:23.205889   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1103 20:31:23.405375   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:23.496542   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:23.705203   12790 kapi.go:107] duration metric: took 1m6.009662604s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1103 20:31:23.707275   12790 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-643880 cluster.
	I1103 20:31:23.709087   12790 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1103 20:31:23.711746   12790 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1103 20:31:23.894771   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:23.995048   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:24.393712   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:24.494590   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:24.833440   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:24.994096   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:25.062530   12790 system_pods.go:59] 19 kube-system pods found
	I1103 20:31:25.062564   12790 system_pods.go:61] "coredns-5dd5756b68-s7nc7" [6704ea0a-7c9e-41be-8377-71149d4bbacc] Running
	I1103 20:31:25.062571   12790 system_pods.go:61] "csi-hostpath-attacher-0" [9bd6fe0a-d350-4d2f-a2c1-2f890c8cf269] Running
	I1103 20:31:25.062578   12790 system_pods.go:61] "csi-hostpath-resizer-0" [3390ab30-6dfe-46f4-9f92-79a76e1d3fe2] Running
	I1103 20:31:25.062589   12790 system_pods.go:61] "csi-hostpathplugin-2pbhk" [6bc75c82-226e-41aa-85ab-eb2828e9c0ff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1103 20:31:25.062597   12790 system_pods.go:61] "etcd-addons-643880" [848affe0-6a45-42d4-8dd6-ae4c28940a91] Running
	I1103 20:31:25.062607   12790 system_pods.go:61] "kindnet-mj9r2" [00bd0553-d177-45c5-b894-9151cbeed8b9] Running
	I1103 20:31:25.062621   12790 system_pods.go:61] "kube-apiserver-addons-643880" [9a53e833-d733-442b-ac54-fe2683fb3eba] Running
	I1103 20:31:25.062627   12790 system_pods.go:61] "kube-controller-manager-addons-643880" [3cfb9c8f-6515-4368-b565-327f6a45ed1a] Running
	I1103 20:31:25.062638   12790 system_pods.go:61] "kube-ingress-dns-minikube" [f792e6e0-5eb0-497f-804c-15f24d0fd4ad] Running
	I1103 20:31:25.062647   12790 system_pods.go:61] "kube-proxy-52t4q" [aa94d58a-aa01-4ea7-a963-948518416798] Running
	I1103 20:31:25.062656   12790 system_pods.go:61] "kube-scheduler-addons-643880" [9d7eea37-48bb-4e3f-bf5c-cd2f032bbb8f] Running
	I1103 20:31:25.062665   12790 system_pods.go:61] "metrics-server-7c66d45ddc-n4gbx" [c63f6ef8-4bcb-47d0-ad6a-5f786174932e] Running
	I1103 20:31:25.062675   12790 system_pods.go:61] "nvidia-device-plugin-daemonset-ss2kh" [1eb8f77f-3488-42c5-86e7-82bdacdc4a40] Running
	I1103 20:31:25.062682   12790 system_pods.go:61] "registry-g745q" [f1dfd4a5-9963-4985-98c6-e7427baa25ef] Running
	I1103 20:31:25.062691   12790 system_pods.go:61] "registry-proxy-4xwcw" [5a22de6a-dd81-41fc-a1a7-9bbdf76955e8] Running
	I1103 20:31:25.062700   12790 system_pods.go:61] "snapshot-controller-58dbcc7b99-rxmvx" [1e3aecbc-b112-4db4-b4bc-2dfb44c60c6b] Running
	I1103 20:31:25.062710   12790 system_pods.go:61] "snapshot-controller-58dbcc7b99-wkkf2" [2a45062c-a6cc-45c1-a902-25716ecd6788] Running
	I1103 20:31:25.062716   12790 system_pods.go:61] "storage-provisioner" [fdc6b393-d3f0-4c88-8f73-dad07c00f14b] Running
	I1103 20:31:25.062724   12790 system_pods.go:61] "tiller-deploy-7b677967b9-f4x9k" [927a73a8-f1f2-42ae-9cf5-29fd998a00ad] Running
	I1103 20:31:25.062734   12790 system_pods.go:74] duration metric: took 3.927621278s to wait for pod list to return data ...
	I1103 20:31:25.062746   12790 default_sa.go:34] waiting for default service account to be created ...
	I1103 20:31:25.064869   12790 default_sa.go:45] found service account: "default"
	I1103 20:31:25.064893   12790 default_sa.go:55] duration metric: took 2.139635ms for default service account to be created ...
	I1103 20:31:25.064901   12790 system_pods.go:116] waiting for k8s-apps to be running ...
	I1103 20:31:25.074492   12790 system_pods.go:86] 19 kube-system pods found
	I1103 20:31:25.074520   12790 system_pods.go:89] "coredns-5dd5756b68-s7nc7" [6704ea0a-7c9e-41be-8377-71149d4bbacc] Running
	I1103 20:31:25.074531   12790 system_pods.go:89] "csi-hostpath-attacher-0" [9bd6fe0a-d350-4d2f-a2c1-2f890c8cf269] Running
	I1103 20:31:25.074539   12790 system_pods.go:89] "csi-hostpath-resizer-0" [3390ab30-6dfe-46f4-9f92-79a76e1d3fe2] Running
	I1103 20:31:25.074556   12790 system_pods.go:89] "csi-hostpathplugin-2pbhk" [6bc75c82-226e-41aa-85ab-eb2828e9c0ff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1103 20:31:25.074569   12790 system_pods.go:89] "etcd-addons-643880" [848affe0-6a45-42d4-8dd6-ae4c28940a91] Running
	I1103 20:31:25.074583   12790 system_pods.go:89] "kindnet-mj9r2" [00bd0553-d177-45c5-b894-9151cbeed8b9] Running
	I1103 20:31:25.074594   12790 system_pods.go:89] "kube-apiserver-addons-643880" [9a53e833-d733-442b-ac54-fe2683fb3eba] Running
	I1103 20:31:25.074606   12790 system_pods.go:89] "kube-controller-manager-addons-643880" [3cfb9c8f-6515-4368-b565-327f6a45ed1a] Running
	I1103 20:31:25.074619   12790 system_pods.go:89] "kube-ingress-dns-minikube" [f792e6e0-5eb0-497f-804c-15f24d0fd4ad] Running
	I1103 20:31:25.074629   12790 system_pods.go:89] "kube-proxy-52t4q" [aa94d58a-aa01-4ea7-a963-948518416798] Running
	I1103 20:31:25.074642   12790 system_pods.go:89] "kube-scheduler-addons-643880" [9d7eea37-48bb-4e3f-bf5c-cd2f032bbb8f] Running
	I1103 20:31:25.074653   12790 system_pods.go:89] "metrics-server-7c66d45ddc-n4gbx" [c63f6ef8-4bcb-47d0-ad6a-5f786174932e] Running
	I1103 20:31:25.074665   12790 system_pods.go:89] "nvidia-device-plugin-daemonset-ss2kh" [1eb8f77f-3488-42c5-86e7-82bdacdc4a40] Running
	I1103 20:31:25.074678   12790 system_pods.go:89] "registry-g745q" [f1dfd4a5-9963-4985-98c6-e7427baa25ef] Running
	I1103 20:31:25.074689   12790 system_pods.go:89] "registry-proxy-4xwcw" [5a22de6a-dd81-41fc-a1a7-9bbdf76955e8] Running
	I1103 20:31:25.074702   12790 system_pods.go:89] "snapshot-controller-58dbcc7b99-rxmvx" [1e3aecbc-b112-4db4-b4bc-2dfb44c60c6b] Running
	I1103 20:31:25.074714   12790 system_pods.go:89] "snapshot-controller-58dbcc7b99-wkkf2" [2a45062c-a6cc-45c1-a902-25716ecd6788] Running
	I1103 20:31:25.074725   12790 system_pods.go:89] "storage-provisioner" [fdc6b393-d3f0-4c88-8f73-dad07c00f14b] Running
	I1103 20:31:25.074734   12790 system_pods.go:89] "tiller-deploy-7b677967b9-f4x9k" [927a73a8-f1f2-42ae-9cf5-29fd998a00ad] Running
	I1103 20:31:25.074748   12790 system_pods.go:126] duration metric: took 9.840208ms to wait for k8s-apps to be running ...
	I1103 20:31:25.074760   12790 system_svc.go:44] waiting for kubelet service to be running ....
	I1103 20:31:25.074821   12790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1103 20:31:25.085952   12790 system_svc.go:56] duration metric: took 11.186308ms WaitForService to wait for kubelet.
	I1103 20:31:25.085974   12790 kubeadm.go:581] duration metric: took 1m15.355795705s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1103 20:31:25.085996   12790 node_conditions.go:102] verifying NodePressure condition ...
	I1103 20:31:25.088363   12790 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1103 20:31:25.088389   12790 node_conditions.go:123] node cpu capacity is 8
	I1103 20:31:25.088407   12790 node_conditions.go:105] duration metric: took 2.399997ms to run NodePressure ...
	I1103 20:31:25.088417   12790 start.go:228] waiting for startup goroutines ...
	I1103 20:31:25.334040   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:25.495399   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:25.836894   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:25.994848   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:26.334345   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:26.495823   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:26.834258   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:26.995515   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:27.336231   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:27.494861   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:27.833963   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:27.994995   12790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1103 20:31:28.400509   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:28.495618   12790 kapi.go:107] duration metric: took 1m12.567611175s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1103 20:31:28.833576   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:29.333741   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:29.833207   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:30.335237   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:30.833526   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:31.333947   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:31.833588   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:32.333930   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:32.833304   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:33.333740   12790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1103 20:31:33.833815   12790 kapi.go:107] duration metric: took 1m17.011743722s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1103 20:31:33.835667   12790 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, ingress-dns, nvidia-device-plugin, inspektor-gadget, helm-tiller, metrics-server, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1103 20:31:33.836989   12790 addons.go:502] enable addons completed in 1m24.181423769s: enabled=[storage-provisioner cloud-spanner ingress-dns nvidia-device-plugin inspektor-gadget helm-tiller metrics-server default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1103 20:31:33.837027   12790 start.go:233] waiting for cluster config update ...
	I1103 20:31:33.837056   12790 start.go:242] writing updated cluster config ...
	I1103 20:31:33.837274   12790 ssh_runner.go:195] Run: rm -f paused
	I1103 20:31:33.882900   12790 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1103 20:31:33.884872   12790 out.go:177] * Done! kubectl is now configured to use "addons-643880" cluster and "default" namespace by default
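
Annotation: the start sequence above gates on kube-apiserver's /healthz endpoint returning 200 ("ok") before the addon waits are counted as complete, which is why the harness first waits for the apiserver process (pgrep) and only then polls healthz. The same probe can be reproduced by hand; a minimal sketch, assuming minikube's default anonymous access to the health endpoints (so TLS verification can simply be skipped with -k):

	$ curl -k https://192.168.49.2:8443/healthz
	ok
	$ curl -k "https://192.168.49.2:8443/readyz?verbose"   # per-check breakdown of readiness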
	
	* 
	* ==> CRI-O <==
	* Nov 03 20:34:10 addons-643880 crio[951]: time="2023-11-03 20:34:10.575049117Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7" id=7aad6d65-bb71-4cf0-b6e6-8f82f739eeed name=/runtime.v1.ImageService/PullImage
	Nov 03 20:34:10 addons-643880 crio[951]: time="2023-11-03 20:34:10.575789651Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=7310450c-653a-4b0d-baaf-61e35d0afbd5 name=/runtime.v1.ImageService/ImageStatus
	Nov 03 20:34:10 addons-643880 crio[951]: time="2023-11-03 20:34:10.576720695Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=7310450c-653a-4b0d-baaf-61e35d0afbd5 name=/runtime.v1.ImageService/ImageStatus
	Nov 03 20:34:10 addons-643880 crio[951]: time="2023-11-03 20:34:10.577445190Z" level=info msg="Creating container: default/hello-world-app-5d77478584-4q74x/hello-world-app" id=3da0f02d-6065-48c9-b6b0-cf2e0a54eca2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 03 20:34:10 addons-643880 crio[951]: time="2023-11-03 20:34:10.577540036Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 03 20:34:10 addons-643880 crio[951]: time="2023-11-03 20:34:10.645812255Z" level=info msg="Created container 366dc80b17d6baabc3768922159de9a582e27b0e1cf0748677c7e87aa0a932c7: default/hello-world-app-5d77478584-4q74x/hello-world-app" id=3da0f02d-6065-48c9-b6b0-cf2e0a54eca2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 03 20:34:10 addons-643880 crio[951]: time="2023-11-03 20:34:10.646410808Z" level=info msg="Starting container: 366dc80b17d6baabc3768922159de9a582e27b0e1cf0748677c7e87aa0a932c7" id=7395b52f-8086-4629-9c0a-baadd86ec184 name=/runtime.v1.RuntimeService/StartContainer
	Nov 03 20:34:10 addons-643880 crio[951]: time="2023-11-03 20:34:10.654657696Z" level=info msg="Started container" PID=10706 containerID=366dc80b17d6baabc3768922159de9a582e27b0e1cf0748677c7e87aa0a932c7 description=default/hello-world-app-5d77478584-4q74x/hello-world-app id=7395b52f-8086-4629-9c0a-baadd86ec184 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1a0e7c546a1dcc69e0d6950fbca263f8ccda2387eef1a82a553ce0469225b954
	Nov 03 20:34:10 addons-643880 crio[951]: time="2023-11-03 20:34:10.836928274Z" level=info msg="Removing container: 261132ce1c7f7f429d09174dd04da634fb5c3d15dea2543397073eb9ce524d58" id=a7e9edee-2fa4-4469-9cb0-b976f1d690e6 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 03 20:34:10 addons-643880 crio[951]: time="2023-11-03 20:34:10.852379792Z" level=info msg="Removed container 261132ce1c7f7f429d09174dd04da634fb5c3d15dea2543397073eb9ce524d58: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=a7e9edee-2fa4-4469-9cb0-b976f1d690e6 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 03 20:34:12 addons-643880 crio[951]: time="2023-11-03 20:34:12.420622455Z" level=info msg="Stopping container: 816ed75d0c2ad8c4e81590af1fb4106308f7fede1e0a1d6d70ef88777edcd5bb (timeout: 2s)" id=bfcd94d2-0b96-45df-990d-bdfb3de6d6a1 name=/runtime.v1.RuntimeService/StopContainer
	Nov 03 20:34:14 addons-643880 crio[951]: time="2023-11-03 20:34:14.428591956Z" level=warning msg="Stopping container 816ed75d0c2ad8c4e81590af1fb4106308f7fede1e0a1d6d70ef88777edcd5bb with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=bfcd94d2-0b96-45df-990d-bdfb3de6d6a1 name=/runtime.v1.RuntimeService/StopContainer
	Nov 03 20:34:14 addons-643880 conmon[6609]: conmon 816ed75d0c2ad8c4e815 <ninfo>: container 6621 exited with status 137
	Nov 03 20:34:14 addons-643880 crio[951]: time="2023-11-03 20:34:14.572232967Z" level=info msg="Stopped container 816ed75d0c2ad8c4e81590af1fb4106308f7fede1e0a1d6d70ef88777edcd5bb: ingress-nginx/ingress-nginx-controller-7c6974c4d8-wnmpx/controller" id=bfcd94d2-0b96-45df-990d-bdfb3de6d6a1 name=/runtime.v1.RuntimeService/StopContainer
	Nov 03 20:34:14 addons-643880 crio[951]: time="2023-11-03 20:34:14.572701262Z" level=info msg="Stopping pod sandbox: bb93b3007ba25f9af4ac0d56148a9af07f7af7823a957d6c5d3fc2ce413e656e" id=6993d598-8551-485b-8cb3-1deb9d1330ae name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 03 20:34:14 addons-643880 crio[951]: time="2023-11-03 20:34:14.575462231Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-V6PG24WL5W2MQTOR - [0:0]\n:KUBE-HP-F5GZ4D63ONGI26CS - [0:0]\n-X KUBE-HP-V6PG24WL5W2MQTOR\n-X KUBE-HP-F5GZ4D63ONGI26CS\nCOMMIT\n"
	Nov 03 20:34:14 addons-643880 crio[951]: time="2023-11-03 20:34:14.576765400Z" level=info msg="Closing host port tcp:80"
	Nov 03 20:34:14 addons-643880 crio[951]: time="2023-11-03 20:34:14.576799744Z" level=info msg="Closing host port tcp:443"
	Nov 03 20:34:14 addons-643880 crio[951]: time="2023-11-03 20:34:14.578012143Z" level=info msg="Host port tcp:80 does not have an open socket"
	Nov 03 20:34:14 addons-643880 crio[951]: time="2023-11-03 20:34:14.578029696Z" level=info msg="Host port tcp:443 does not have an open socket"
	Nov 03 20:34:14 addons-643880 crio[951]: time="2023-11-03 20:34:14.578144983Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7c6974c4d8-wnmpx Namespace:ingress-nginx ID:bb93b3007ba25f9af4ac0d56148a9af07f7af7823a957d6c5d3fc2ce413e656e UID:e9deb307-3130-4d73-b0ca-7172858f223a NetNS:/var/run/netns/fa74de4d-ed84-4d83-a26d-08faf9d5a56f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 03 20:34:14 addons-643880 crio[951]: time="2023-11-03 20:34:14.578256183Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7c6974c4d8-wnmpx from CNI network \"kindnet\" (type=ptp)"
	Nov 03 20:34:14 addons-643880 crio[951]: time="2023-11-03 20:34:14.609847206Z" level=info msg="Stopped pod sandbox: bb93b3007ba25f9af4ac0d56148a9af07f7af7823a957d6c5d3fc2ce413e656e" id=6993d598-8551-485b-8cb3-1deb9d1330ae name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 03 20:34:14 addons-643880 crio[951]: time="2023-11-03 20:34:14.848264815Z" level=info msg="Removing container: 816ed75d0c2ad8c4e81590af1fb4106308f7fede1e0a1d6d70ef88777edcd5bb" id=eb0ce2e3-b403-44fa-9572-1899f9cd943e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 03 20:34:14 addons-643880 crio[951]: time="2023-11-03 20:34:14.865413143Z" level=info msg="Removed container 816ed75d0c2ad8c4e81590af1fb4106308f7fede1e0a1d6d70ef88777edcd5bb: ingress-nginx/ingress-nginx-controller-7c6974c4d8-wnmpx/controller" id=eb0ce2e3-b403-44fa-9572-1899f9cd943e name=/runtime.v1.RuntimeService/RemoveContainer
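
Annotation: the exit status 137 recorded by conmon above is 128 + 9, i.e. the ingress controller did not exit within the 2-second stop timeout, so CRI-O escalated to SIGKILL, then tore down the sandbox, restored the hostport iptables rules for ports 80/443, and detached the pod from the kindnet CNI network. The same graceful-then-forced stop can be issued by hand; a sketch, assuming SSH access to the node (container IDs may be abbreviated to any unique prefix):

	$ sudo crictl stop --timeout 2 816ed75d0c2ad   # SIGTERM, escalate to SIGKILL after 2s
	$ sudo crictl rm 816ed75d0c2ad                 # remove the stopped container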
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	366dc80b17d6b       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   1a0e7c546a1dc       hello-world-app-5d77478584-4q74x
	52a23d93a251a       ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4                        2 minutes ago       Running             headlamp                  0                   9ca1e08078966       headlamp-94b766c-2gpnt
	6ef6a9d2c3d0b       docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d                              2 minutes ago       Running             nginx                     0                   bef123ed2b784       nginx
	1cfb0db7e7b2a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   27d90f4d69197       gcp-auth-d4c87556c-qd7jz
	45b9d8206226e       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                             2 minutes ago       Exited              patch                     2                   fded0b1736235       ingress-nginx-admission-patch-ldz5r
	7e8c4657368e1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   d97d673ae8d88       ingress-nginx-admission-create-gtmmm
	8ef006fb12761       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   e587e06a0b760       local-path-provisioner-78b46b4d5c-6kjc9
	ca0515c04c7b3       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   d9b976e5c0521       coredns-5dd5756b68-s7nc7
	1479c1a3d4e42       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   38d32640539a2       storage-provisioner
	4448697ed5490       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                                             4 minutes ago       Running             kube-proxy                0                   b20c1d3b106cf       kube-proxy-52t4q
	8f3e1974c6920       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                             4 minutes ago       Running             kindnet-cni               0                   77cb9fe2f4211       kindnet-mj9r2
	2e128682b3761       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                                             4 minutes ago       Running             kube-controller-manager   0                   8b8e8f5ff9101       kube-controller-manager-addons-643880
	770dfa23f338e       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                                             4 minutes ago       Running             kube-apiserver            0                   0d8b3c8686128       kube-apiserver-addons-643880
	b7bcbada60161       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                                             4 minutes ago       Running             kube-scheduler            0                   6f0703c974a55       kube-scheduler-addons-643880
	dc7d3a5edde0d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   e2fc66e76d7d1       etcd-addons-643880
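
Annotation: the "Gathering logs" steps earlier in the run map directly onto this table; each hash in the CONTAINER column is the ID passed to crictl logs. To tail any of these by hand, a sketch assuming SSH access to the node:

	$ minikube -p addons-643880 ssh
	$ sudo crictl ps -a                           # reproduces the table above
	$ sudo crictl logs --tail 400 dc7d3a5edde0d   # etcd, by ID prefix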
	
	* 
	* ==> coredns [ca0515c04c7b38b82f9c49e411c1dd482af6cac8cf988716dd77f42556b71c08] <==
	* [INFO] 10.244.0.18:43016 - 2861 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000068209s
	[INFO] 10.244.0.18:35099 - 16131 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.00432333s
	[INFO] 10.244.0.18:35099 - 257 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004479173s
	[INFO] 10.244.0.18:60161 - 12916 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005047585s
	[INFO] 10.244.0.18:60161 - 30826 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005197149s
	[INFO] 10.244.0.18:35907 - 11269 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004766879s
	[INFO] 10.244.0.18:35907 - 35611 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005109568s
	[INFO] 10.244.0.18:50873 - 58876 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000064584s
	[INFO] 10.244.0.18:50873 - 38910 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000111586s
	[INFO] 10.244.0.20:34021 - 14305 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000154975s
	[INFO] 10.244.0.20:33294 - 21617 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000200027s
	[INFO] 10.244.0.20:33299 - 49346 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000130894s
	[INFO] 10.244.0.20:56560 - 37703 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000120441s
	[INFO] 10.244.0.20:38529 - 7982 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000654615s
	[INFO] 10.244.0.20:57717 - 60196 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000672518s
	[INFO] 10.244.0.20:53489 - 3313 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.004987884s
	[INFO] 10.244.0.20:44539 - 49512 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.005192907s
	[INFO] 10.244.0.20:50521 - 65099 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004830225s
	[INFO] 10.244.0.20:32959 - 1369 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006447769s
	[INFO] 10.244.0.20:58700 - 46895 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006299896s
	[INFO] 10.244.0.20:51377 - 13144 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006964995s
	[INFO] 10.244.0.20:54400 - 48972 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000636941s
	[INFO] 10.244.0.20:51308 - 9464 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000700924s
	[INFO] 10.244.0.21:48387 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000095572s
	[INFO] 10.244.0.21:44546 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000080165s
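(Editor's note: the NXDOMAIN churn above is ordinary resolv.conf search-path expansion, not a fault. With the Kubernetes default options ndots:5, a short name like storage.googleapis.com is first tried against every search suffix, and each of those attempts answers NXDOMAIN, before the bare name finally resolves with NOERROR. Below is a sketch of the pod resolv.conf that would produce exactly the suffix sequence seen above; the search line is reconstructed from those suffixes, while the nameserver address is the conventional kube-dns ClusterIP and is an assumption here.)

	search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	nameserver 10.96.0.10
	options ndots:5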
	
	* 
	* ==> describe nodes <==
	* Name:               addons-643880
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-643880
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=44765b58c8440feed3c9edc110a2d06dc722956e
	                    minikube.k8s.io/name=addons-643880
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_03T20_29_58_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-643880
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Nov 2023 20:29:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-643880
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Nov 2023 20:34:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Nov 2023 20:33:02 +0000   Fri, 03 Nov 2023 20:29:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Nov 2023 20:33:02 +0000   Fri, 03 Nov 2023 20:29:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Nov 2023 20:33:02 +0000   Fri, 03 Nov 2023 20:29:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 Nov 2023 20:33:02 +0000   Fri, 03 Nov 2023 20:30:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-643880
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 9264bc99a3df4396bfbac741e0a92d95
	  System UUID:                7c29a6a7-5aac-46b3-9b37-53a03ee3c889
	  Boot ID:                    399e003d-4e5c-4eac-b4ee-6a616fb3f737
	  Kernel Version:             5.15.0-1046-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-4q74x           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gcp-auth                    gcp-auth-d4c87556c-qd7jz                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  headlamp                    headlamp-94b766c-2gpnt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 coredns-5dd5756b68-s7nc7                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m8s
	  kube-system                 etcd-addons-643880                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m22s
	  kube-system                 kindnet-mj9r2                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m9s
	  kube-system                 kube-apiserver-addons-643880               250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-controller-manager-addons-643880      200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-proxy-52t4q                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-scheduler-addons-643880               100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  local-path-storage          local-path-provisioner-78b46b4d5c-6kjc9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m5s                   kube-proxy       
	  Normal  Starting                 4m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m28s (x8 over 4m28s)  kubelet          Node addons-643880 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m28s (x8 over 4m28s)  kubelet          Node addons-643880 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m28s (x8 over 4m28s)  kubelet          Node addons-643880 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m22s                  kubelet          Node addons-643880 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s                  kubelet          Node addons-643880 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s                  kubelet          Node addons-643880 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m10s                  node-controller  Node addons-643880 event: Registered Node addons-643880 in Controller
	  Normal  NodeReady                3m35s                  kubelet          Node addons-643880 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.009153] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004083] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.004169] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.002543] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.001322] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000999] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000810] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000745] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000923] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000815] platform eisa.0: Cannot allocate resource for EISA slot 8
	[ +10.000941] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 3 20:31] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 96 0a 55 cb 68 de b2 40 3c f2 c9 9f 08 00
	[  +1.004080] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 96 0a 55 cb 68 de b2 40 3c f2 c9 9f 08 00
	[Nov 3 20:32] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 96 0a 55 cb 68 de b2 40 3c f2 c9 9f 08 00
	[  +4.063604] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 96 0a 55 cb 68 de b2 40 3c f2 c9 9f 08 00
	[  +8.195199] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 96 0a 55 cb 68 de b2 40 3c f2 c9 9f 08 00
	[ +16.122463] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 96 0a 55 cb 68 de b2 40 3c f2 c9 9f 08 00
	[Nov 3 20:33] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 96 0a 55 cb 68 de b2 40 3c f2 c9 9f 08 00
	
	* 
	* ==> etcd [dc7d3a5edde0d5a64f7113337ae24a8e5d7253695ce00a087712c9bb72fe18d6] <==
	* {"level":"info","ts":"2023-11-03T20:30:13.696196Z","caller":"traceutil/trace.go:171","msg":"trace[1269177826] transaction","detail":"{read_only:false; response_revision:426; number_of_response:1; }","duration":"186.038354ms","start":"2023-11-03T20:30:13.510152Z","end":"2023-11-03T20:30:13.69619Z","steps":["trace[1269177826] 'process raft request'  (duration: 185.226456ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-03T20:30:13.696331Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.672792ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:1 size:317"}
	{"level":"info","ts":"2023-11-03T20:30:13.696362Z","caller":"traceutil/trace.go:171","msg":"trace[213095075] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:426; }","duration":"183.715519ms","start":"2023-11-03T20:30:13.512638Z","end":"2023-11-03T20:30:13.696354Z","steps":["trace[213095075] 'agreement among raft nodes before linearized reading'  (duration: 183.642241ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-03T20:30:13.697163Z","caller":"traceutil/trace.go:171","msg":"trace[402204078] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"100.604444ms","start":"2023-11-03T20:30:13.596546Z","end":"2023-11-03T20:30:13.697151Z","steps":["trace[402204078] 'process raft request'  (duration: 100.358681ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-03T20:30:13.697333Z","caller":"traceutil/trace.go:171","msg":"trace[1709771027] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"100.593155ms","start":"2023-11-03T20:30:13.59673Z","end":"2023-11-03T20:30:13.697324Z","steps":["trace[1709771027] 'process raft request'  (duration: 100.227507ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-03T20:30:13.697529Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.119702ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:1 size:3144"}
	{"level":"info","ts":"2023-11-03T20:30:13.697582Z","caller":"traceutil/trace.go:171","msg":"trace[1037638506] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:1; response_revision:428; }","duration":"100.175493ms","start":"2023-11-03T20:30:13.597399Z","end":"2023-11-03T20:30:13.697575Z","steps":["trace[1037638506] 'agreement among raft nodes before linearized reading'  (duration: 100.076494ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-03T20:30:13.995725Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.19774ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-03T20:30:13.995878Z","caller":"traceutil/trace.go:171","msg":"trace[246359775] range","detail":"{range_begin:/registry/clusterrolebindings/minikube-ingress-dns; range_end:; response_count:0; response_revision:445; }","duration":"100.359359ms","start":"2023-11-03T20:30:13.895504Z","end":"2023-11-03T20:30:13.995863Z","steps":["trace[246359775] 'range keys from in-memory index tree'  (duration: 86.164114ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-03T20:30:14.510715Z","caller":"traceutil/trace.go:171","msg":"trace[360908454] transaction","detail":"{read_only:false; response_revision:483; number_of_response:1; }","duration":"100.800354ms","start":"2023-11-03T20:30:14.40989Z","end":"2023-11-03T20:30:14.51069Z","steps":["trace[360908454] 'process raft request'  (duration: 100.762234ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-03T20:30:14.511012Z","caller":"traceutil/trace.go:171","msg":"trace[1269102878] transaction","detail":"{read_only:false; response_revision:482; number_of_response:1; }","duration":"101.698317ms","start":"2023-11-03T20:30:14.409302Z","end":"2023-11-03T20:30:14.511Z","steps":["trace[1269102878] 'process raft request'  (duration: 87.811946ms)","trace[1269102878] 'compare'  (duration: 13.439997ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-03T20:30:14.511109Z","caller":"traceutil/trace.go:171","msg":"trace[1655831959] linearizableReadLoop","detail":"{readStateIndex:495; appliedIndex:494; }","duration":"101.568584ms","start":"2023-11-03T20:30:14.409533Z","end":"2023-11-03T20:30:14.511102Z","steps":["trace[1655831959] 'read index received'  (duration: 88.469322ms)","trace[1655831959] 'applied index is now lower than readState.Index'  (duration: 13.09688ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-03T20:30:14.511247Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.713943ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/registry-proxy\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-03T20:30:14.511274Z","caller":"traceutil/trace.go:171","msg":"trace[1551601738] range","detail":"{range_begin:/registry/daemonsets/kube-system/registry-proxy; range_end:; response_count:0; response_revision:483; }","duration":"101.749897ms","start":"2023-11-03T20:30:14.409514Z","end":"2023-11-03T20:30:14.511264Z","steps":["trace[1551601738] 'agreement among raft nodes before linearized reading'  (duration: 101.690945ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-03T20:30:14.511816Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.128ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/tiller-deploy\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-03T20:30:14.511846Z","caller":"traceutil/trace.go:171","msg":"trace[1541860320] range","detail":"{range_begin:/registry/services/specs/kube-system/tiller-deploy; range_end:; response_count:0; response_revision:488; }","duration":"102.161826ms","start":"2023-11-03T20:30:14.409675Z","end":"2023-11-03T20:30:14.511837Z","steps":["trace[1541860320] 'agreement among raft nodes before linearized reading'  (duration: 102.116109ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-03T20:30:14.5126Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.154859ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/local-path-storage\" ","response":"range_response_count:1 size:621"}
	{"level":"info","ts":"2023-11-03T20:30:14.512628Z","caller":"traceutil/trace.go:171","msg":"trace[134631407] range","detail":"{range_begin:/registry/namespaces/local-path-storage; range_end:; response_count:1; response_revision:488; }","duration":"101.184217ms","start":"2023-11-03T20:30:14.411435Z","end":"2023-11-03T20:30:14.512619Z","steps":["trace[134631407] 'agreement among raft nodes before linearized reading'  (duration: 101.132678ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-03T20:30:14.512728Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.727918ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-03T20:30:14.512747Z","caller":"traceutil/trace.go:171","msg":"trace[516525035] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:488; }","duration":"102.748279ms","start":"2023-11-03T20:30:14.409993Z","end":"2023-11-03T20:30:14.512741Z","steps":["trace[516525035] 'agreement among raft nodes before linearized reading'  (duration: 102.712472ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-03T20:30:14.512849Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.065103ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-643880\" ","response":"range_response_count:1 size:5661"}
	{"level":"info","ts":"2023-11-03T20:30:14.512873Z","caller":"traceutil/trace.go:171","msg":"trace[1528004739] range","detail":"{range_begin:/registry/minions/addons-643880; range_end:; response_count:1; response_revision:488; }","duration":"103.08952ms","start":"2023-11-03T20:30:14.409777Z","end":"2023-11-03T20:30:14.512867Z","steps":["trace[1528004739] 'agreement among raft nodes before linearized reading'  (duration: 103.043418ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-03T20:31:36.682612Z","caller":"traceutil/trace.go:171","msg":"trace[958169234] transaction","detail":"{read_only:false; response_revision:1177; number_of_response:1; }","duration":"111.371968ms","start":"2023-11-03T20:31:36.571222Z","end":"2023-11-03T20:31:36.682594Z","steps":["trace[958169234] 'process raft request'  (duration: 111.1822ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-03T20:31:36.807454Z","caller":"traceutil/trace.go:171","msg":"trace[1997278542] transaction","detail":"{read_only:false; response_revision:1178; number_of_response:1; }","duration":"121.825712ms","start":"2023-11-03T20:31:36.685612Z","end":"2023-11-03T20:31:36.807437Z","steps":["trace[1997278542] 'process raft request'  (duration: 56.11627ms)","trace[1997278542] 'compare'  (duration: 65.622518ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-03T20:31:59.41123Z","caller":"traceutil/trace.go:171","msg":"trace[1328177496] transaction","detail":"{read_only:false; response_revision:1392; number_of_response:1; }","duration":"138.965942ms","start":"2023-11-03T20:31:59.272239Z","end":"2023-11-03T20:31:59.411205Z","steps":["trace[1328177496] 'process raft request'  (duration: 138.776348ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [1cfb0db7e7b2a5afee72a49e5fa801121f2299470eb5b73f583596cf4dc12991] <==
	* 2023/11/03 20:31:22 GCP Auth Webhook started!
	2023/11/03 20:31:44 Ready to marshal response ...
	2023/11/03 20:31:44 Ready to write response ...
	2023/11/03 20:31:44 Ready to marshal response ...
	2023/11/03 20:31:44 Ready to write response ...
	2023/11/03 20:31:48 Ready to marshal response ...
	2023/11/03 20:31:48 Ready to write response ...
	2023/11/03 20:31:48 Ready to marshal response ...
	2023/11/03 20:31:48 Ready to write response ...
	2023/11/03 20:31:48 Ready to marshal response ...
	2023/11/03 20:31:48 Ready to write response ...
	2023/11/03 20:32:00 Ready to marshal response ...
	2023/11/03 20:32:00 Ready to write response ...
	2023/11/03 20:32:01 Ready to marshal response ...
	2023/11/03 20:32:01 Ready to write response ...
	2023/11/03 20:32:01 Ready to marshal response ...
	2023/11/03 20:32:01 Ready to write response ...
	2023/11/03 20:32:01 Ready to marshal response ...
	2023/11/03 20:32:01 Ready to write response ...
	2023/11/03 20:32:25 Ready to marshal response ...
	2023/11/03 20:32:25 Ready to write response ...
	2023/11/03 20:32:59 Ready to marshal response ...
	2023/11/03 20:32:59 Ready to write response ...
	2023/11/03 20:34:09 Ready to marshal response ...
	2023/11/03 20:34:09 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  20:34:19 up 16 min,  0 users,  load average: 0.69, 0.61, 0.29
	Linux addons-643880 5.15.0-1046-gcp #54~20.04.1-Ubuntu SMP Wed Oct 25 08:22:15 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [8f3e1974c6920b94a097c9c33ae5a280074a77f0559fcac89fd2fedfa262b5d6] <==
	* I1103 20:32:14.332854       1 main.go:227] handling current node
	I1103 20:32:24.344974       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1103 20:32:24.344996       1 main.go:227] handling current node
	I1103 20:32:34.355474       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1103 20:32:34.355495       1 main.go:227] handling current node
	I1103 20:32:44.359245       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1103 20:32:44.359265       1 main.go:227] handling current node
	I1103 20:32:54.371417       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1103 20:32:54.371440       1 main.go:227] handling current node
	I1103 20:33:04.381670       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1103 20:33:04.381700       1 main.go:227] handling current node
	I1103 20:33:14.385859       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1103 20:33:14.385883       1 main.go:227] handling current node
	I1103 20:33:24.397431       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1103 20:33:24.397453       1 main.go:227] handling current node
	I1103 20:33:34.408460       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1103 20:33:34.408496       1 main.go:227] handling current node
	I1103 20:33:44.412176       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1103 20:33:44.412200       1 main.go:227] handling current node
	I1103 20:33:54.423575       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1103 20:33:54.423598       1 main.go:227] handling current node
	I1103 20:34:04.429459       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1103 20:34:04.429486       1 main.go:227] handling current node
	I1103 20:34:14.432929       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1103 20:34:14.432949       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [770dfa23f338e302d0146da259ee43faa7575ca773f27055b05d04ee8778815e] <==
	* E1103 20:31:47.717350       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.22:59330: read: connection reset by peer
	I1103 20:31:47.833445       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1103 20:31:48.358548       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.58.109"}
	I1103 20:31:51.031982       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1103 20:32:01.248905       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.46.0"}
	I1103 20:32:36.901555       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1103 20:33:15.020310       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1103 20:33:15.020474       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1103 20:33:15.025829       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1103 20:33:15.025898       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1103 20:33:15.038170       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1103 20:33:15.038212       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1103 20:33:15.046023       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1103 20:33:15.046135       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1103 20:33:15.050314       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1103 20:33:15.050385       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1103 20:33:15.055219       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1103 20:33:15.055325       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1103 20:33:15.088941       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1103 20:33:15.089344       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1103 20:33:16.038631       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1103 20:33:16.089509       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1103 20:33:16.101871       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1103 20:34:09.458579       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.110.96"}
	E1103 20:34:11.500066       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	* 
	* ==> kube-controller-manager [2e128682b37610d74ff04e0903f0dbf201db63c97e05787cf4fed2284af2f6d0] <==
	* E1103 20:33:36.801426       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1103 20:33:38.881056       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1103 20:33:38.881082       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1103 20:33:39.617778       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I1103 20:33:39.617810       1 shared_informer.go:318] Caches are synced for resource quota
	I1103 20:33:39.919533       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1103 20:33:39.919574       1 shared_informer.go:318] Caches are synced for garbage collector
	W1103 20:33:55.195217       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1103 20:33:55.195249       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1103 20:33:56.915264       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1103 20:33:56.915291       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1103 20:33:59.365670       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1103 20:33:59.365702       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1103 20:34:09.306235       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1103 20:34:09.316310       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-4q74x"
	I1103 20:34:09.321048       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="14.845893ms"
	I1103 20:34:09.330282       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="9.075079ms"
	I1103 20:34:09.330362       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="35.332µs"
	I1103 20:34:10.862377       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="4.776535ms"
	I1103 20:34:10.862497       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="42.589µs"
	I1103 20:34:11.406703       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1103 20:34:11.408488       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="9.486µs"
	I1103 20:34:11.413921       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W1103 20:34:14.482626       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1103 20:34:14.482662       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [4448697ed5490be30ce62f083e87a9978d2f3bfa550ba8e4c16dfb4c0eee8bc9] <==
	* I1103 20:30:13.807168       1 server_others.go:69] "Using iptables proxy"
	I1103 20:30:14.013647       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1103 20:30:14.500347       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1103 20:30:14.504492       1 server_others.go:152] "Using iptables Proxier"
	I1103 20:30:14.504532       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1103 20:30:14.504542       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1103 20:30:14.504577       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1103 20:30:14.504863       1 server.go:846] "Version info" version="v1.28.3"
	I1103 20:30:14.504882       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1103 20:30:14.505430       1 config.go:188] "Starting service config controller"
	I1103 20:30:14.506177       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1103 20:30:14.505470       1 config.go:97] "Starting endpoint slice config controller"
	I1103 20:30:14.506235       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1103 20:30:14.505979       1 config.go:315] "Starting node config controller"
	I1103 20:30:14.506248       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1103 20:30:14.607988       1 shared_informer.go:318] Caches are synced for node config
	I1103 20:30:14.607989       1 shared_informer.go:318] Caches are synced for service config
	I1103 20:30:14.608004       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [b7bcbada6016174319f22faefa846895f23a5a8be2f5bca1c987869629237049] <==
	* W1103 20:29:54.511101       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1103 20:29:54.588604       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1103 20:29:54.510531       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1103 20:29:54.511032       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1103 20:29:54.588694       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1103 20:29:54.511193       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1103 20:29:54.588725       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1103 20:29:54.511231       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1103 20:29:54.588743       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1103 20:29:54.588768       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1103 20:29:54.511346       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1103 20:29:54.588873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1103 20:29:54.511452       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1103 20:29:54.588940       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1103 20:29:54.511284       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1103 20:29:54.589001       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1103 20:29:55.346844       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1103 20:29:55.346878       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1103 20:29:55.416399       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1103 20:29:55.416448       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1103 20:29:55.570214       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1103 20:29:55.570249       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1103 20:29:55.633924       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1103 20:29:55.633978       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1103 20:29:58.606255       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Nov 03 20:34:09 addons-643880 kubelet[1557]: I1103 20:34:09.484545    1557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/56d10cec-e4fb-4d23-9641-abeb10245e39-gcp-creds\") pod \"hello-world-app-5d77478584-4q74x\" (UID: \"56d10cec-e4fb-4d23-9641-abeb10245e39\") " pod="default/hello-world-app-5d77478584-4q74x"
	Nov 03 20:34:09 addons-643880 kubelet[1557]: I1103 20:34:09.484636    1557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkvcj\" (UniqueName: \"kubernetes.io/projected/56d10cec-e4fb-4d23-9641-abeb10245e39-kube-api-access-fkvcj\") pod \"hello-world-app-5d77478584-4q74x\" (UID: \"56d10cec-e4fb-4d23-9641-abeb10245e39\") " pod="default/hello-world-app-5d77478584-4q74x"
	Nov 03 20:34:09 addons-643880 kubelet[1557]: W1103 20:34:09.717188    1557 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/4c5cae6311a95ca11574f6de7dfe1ad7ff87ed607511443d33a6d2afd8e712f0/crio-1a0e7c546a1dcc69e0d6950fbca263f8ccda2387eef1a82a553ce0469225b954 WatchSource:0}: Error finding container 1a0e7c546a1dcc69e0d6950fbca263f8ccda2387eef1a82a553ce0469225b954: Status 404 returned error can't find the container with id 1a0e7c546a1dcc69e0d6950fbca263f8ccda2387eef1a82a553ce0469225b954
	Nov 03 20:34:10 addons-643880 kubelet[1557]: I1103 20:34:10.693630    1557 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8dvn\" (UniqueName: \"kubernetes.io/projected/f792e6e0-5eb0-497f-804c-15f24d0fd4ad-kube-api-access-s8dvn\") pod \"f792e6e0-5eb0-497f-804c-15f24d0fd4ad\" (UID: \"f792e6e0-5eb0-497f-804c-15f24d0fd4ad\") "
	Nov 03 20:34:10 addons-643880 kubelet[1557]: I1103 20:34:10.695506    1557 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f792e6e0-5eb0-497f-804c-15f24d0fd4ad-kube-api-access-s8dvn" (OuterVolumeSpecName: "kube-api-access-s8dvn") pod "f792e6e0-5eb0-497f-804c-15f24d0fd4ad" (UID: "f792e6e0-5eb0-497f-804c-15f24d0fd4ad"). InnerVolumeSpecName "kube-api-access-s8dvn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 03 20:34:10 addons-643880 kubelet[1557]: I1103 20:34:10.794921    1557 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-s8dvn\" (UniqueName: \"kubernetes.io/projected/f792e6e0-5eb0-497f-804c-15f24d0fd4ad-kube-api-access-s8dvn\") on node \"addons-643880\" DevicePath \"\""
	Nov 03 20:34:10 addons-643880 kubelet[1557]: I1103 20:34:10.835336    1557 scope.go:117] "RemoveContainer" containerID="261132ce1c7f7f429d09174dd04da634fb5c3d15dea2543397073eb9ce524d58"
	Nov 03 20:34:10 addons-643880 kubelet[1557]: I1103 20:34:10.852625    1557 scope.go:117] "RemoveContainer" containerID="261132ce1c7f7f429d09174dd04da634fb5c3d15dea2543397073eb9ce524d58"
	Nov 03 20:34:10 addons-643880 kubelet[1557]: E1103 20:34:10.852983    1557 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"261132ce1c7f7f429d09174dd04da634fb5c3d15dea2543397073eb9ce524d58\": container with ID starting with 261132ce1c7f7f429d09174dd04da634fb5c3d15dea2543397073eb9ce524d58 not found: ID does not exist" containerID="261132ce1c7f7f429d09174dd04da634fb5c3d15dea2543397073eb9ce524d58"
	Nov 03 20:34:10 addons-643880 kubelet[1557]: I1103 20:34:10.853027    1557 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"261132ce1c7f7f429d09174dd04da634fb5c3d15dea2543397073eb9ce524d58"} err="failed to get container status \"261132ce1c7f7f429d09174dd04da634fb5c3d15dea2543397073eb9ce524d58\": rpc error: code = NotFound desc = could not find container \"261132ce1c7f7f429d09174dd04da634fb5c3d15dea2543397073eb9ce524d58\": container with ID starting with 261132ce1c7f7f429d09174dd04da634fb5c3d15dea2543397073eb9ce524d58 not found: ID does not exist"
	Nov 03 20:34:10 addons-643880 kubelet[1557]: I1103 20:34:10.857067    1557 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-4q74x" podStartSLOduration=1.002196932 podCreationTimestamp="2023-11-03 20:34:09 +0000 UTC" firstStartedPulling="2023-11-03 20:34:09.72047387 +0000 UTC m=+252.286221241" lastFinishedPulling="2023-11-03 20:34:10.575308566 +0000 UTC m=+253.141055936" observedRunningTime="2023-11-03 20:34:10.856840764 +0000 UTC m=+253.422588148" watchObservedRunningTime="2023-11-03 20:34:10.857031627 +0000 UTC m=+253.422779010"
	Nov 03 20:34:11 addons-643880 kubelet[1557]: I1103 20:34:11.520974    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="86e97af3-2e73-4d0f-9b7b-40f57e7fbf56" path="/var/lib/kubelet/pods/86e97af3-2e73-4d0f-9b7b-40f57e7fbf56/volumes"
	Nov 03 20:34:11 addons-643880 kubelet[1557]: I1103 20:34:11.521468    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f557ddaa-61f5-47be-a044-90ad96f06a8b" path="/var/lib/kubelet/pods/f557ddaa-61f5-47be-a044-90ad96f06a8b/volumes"
	Nov 03 20:34:11 addons-643880 kubelet[1557]: I1103 20:34:11.521915    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f792e6e0-5eb0-497f-804c-15f24d0fd4ad" path="/var/lib/kubelet/pods/f792e6e0-5eb0-497f-804c-15f24d0fd4ad/volumes"
	Nov 03 20:34:14 addons-643880 kubelet[1557]: I1103 20:34:14.719264    1557 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n77z\" (UniqueName: \"kubernetes.io/projected/e9deb307-3130-4d73-b0ca-7172858f223a-kube-api-access-5n77z\") pod \"e9deb307-3130-4d73-b0ca-7172858f223a\" (UID: \"e9deb307-3130-4d73-b0ca-7172858f223a\") "
	Nov 03 20:34:14 addons-643880 kubelet[1557]: I1103 20:34:14.719318    1557 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e9deb307-3130-4d73-b0ca-7172858f223a-webhook-cert\") pod \"e9deb307-3130-4d73-b0ca-7172858f223a\" (UID: \"e9deb307-3130-4d73-b0ca-7172858f223a\") "
	Nov 03 20:34:14 addons-643880 kubelet[1557]: I1103 20:34:14.720970    1557 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9deb307-3130-4d73-b0ca-7172858f223a-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "e9deb307-3130-4d73-b0ca-7172858f223a" (UID: "e9deb307-3130-4d73-b0ca-7172858f223a"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 03 20:34:14 addons-643880 kubelet[1557]: I1103 20:34:14.721226    1557 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9deb307-3130-4d73-b0ca-7172858f223a-kube-api-access-5n77z" (OuterVolumeSpecName: "kube-api-access-5n77z") pod "e9deb307-3130-4d73-b0ca-7172858f223a" (UID: "e9deb307-3130-4d73-b0ca-7172858f223a"). InnerVolumeSpecName "kube-api-access-5n77z". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 03 20:34:14 addons-643880 kubelet[1557]: I1103 20:34:14.819584    1557 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5n77z\" (UniqueName: \"kubernetes.io/projected/e9deb307-3130-4d73-b0ca-7172858f223a-kube-api-access-5n77z\") on node \"addons-643880\" DevicePath \"\""
	Nov 03 20:34:14 addons-643880 kubelet[1557]: I1103 20:34:14.819625    1557 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e9deb307-3130-4d73-b0ca-7172858f223a-webhook-cert\") on node \"addons-643880\" DevicePath \"\""
	Nov 03 20:34:14 addons-643880 kubelet[1557]: I1103 20:34:14.847353    1557 scope.go:117] "RemoveContainer" containerID="816ed75d0c2ad8c4e81590af1fb4106308f7fede1e0a1d6d70ef88777edcd5bb"
	Nov 03 20:34:14 addons-643880 kubelet[1557]: I1103 20:34:14.865622    1557 scope.go:117] "RemoveContainer" containerID="816ed75d0c2ad8c4e81590af1fb4106308f7fede1e0a1d6d70ef88777edcd5bb"
	Nov 03 20:34:14 addons-643880 kubelet[1557]: E1103 20:34:14.865960    1557 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"816ed75d0c2ad8c4e81590af1fb4106308f7fede1e0a1d6d70ef88777edcd5bb\": container with ID starting with 816ed75d0c2ad8c4e81590af1fb4106308f7fede1e0a1d6d70ef88777edcd5bb not found: ID does not exist" containerID="816ed75d0c2ad8c4e81590af1fb4106308f7fede1e0a1d6d70ef88777edcd5bb"
	Nov 03 20:34:14 addons-643880 kubelet[1557]: I1103 20:34:14.866008    1557 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"816ed75d0c2ad8c4e81590af1fb4106308f7fede1e0a1d6d70ef88777edcd5bb"} err="failed to get container status \"816ed75d0c2ad8c4e81590af1fb4106308f7fede1e0a1d6d70ef88777edcd5bb\": rpc error: code = NotFound desc = could not find container \"816ed75d0c2ad8c4e81590af1fb4106308f7fede1e0a1d6d70ef88777edcd5bb\": container with ID starting with 816ed75d0c2ad8c4e81590af1fb4106308f7fede1e0a1d6d70ef88777edcd5bb not found: ID does not exist"
	Nov 03 20:34:15 addons-643880 kubelet[1557]: I1103 20:34:15.519976    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e9deb307-3130-4d73-b0ca-7172858f223a" path="/var/lib/kubelet/pods/e9deb307-3130-4d73-b0ca-7172858f223a/volumes"
	
	* 
	* ==> storage-provisioner [1479c1a3d4e4256e2b82575c0ff09db411c78b1472611be8d2257ab36e9fedfe] <==
	* I1103 20:30:45.791476       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1103 20:30:45.801259       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1103 20:30:45.801427       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1103 20:30:45.811254       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1103 20:30:45.811407       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-643880_3d04efdc-604b-481f-81be-3f02c561524b!
	I1103 20:30:45.811902       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3e60ecc9-4e71-42da-86a7-898e156f7a0c", APIVersion:"v1", ResourceVersion:"890", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-643880_3d04efdc-604b-481f-81be-3f02c561524b became leader
	I1103 20:30:45.913127       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-643880_3d04efdc-604b-481f-81be-3f02c561524b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-643880 -n addons-643880
helpers_test.go:261: (dbg) Run:  kubectl --context addons-643880 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.81s)
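(Editor's note: curl's exit code 28 conventionally means the transfer timed out, and it propagates through minikube ssh as "Process exited with status 28", so the ingress controller served no response on the node's 127.0.0.1:80 within the 2m10s window even though the nginx pod was Running. A hedged sketch of how the failing step could be retried by hand while inspecting the controller; it assumes the addons-643880 profile is still up, and the label selector is the one the test itself uses.)

	out/minikube-linux-amd64 -p addons-643880 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	kubectl --context addons-643880 -n ingress-nginx get pods -l app.kubernetes.io/component=controller -o wide
	kubectl --context addons-643880 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=50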

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-573959 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (3.636191084s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-573959 image ls: (2.255301621s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-573959" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.89s)
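(Editor's note: the assertion above means the tarball loaded without error but the expected tag never appeared in the runtime's image list. A hedged way to reproduce the check outside the test harness, using the same binary, profile, and tar path shown in the log; the grep filter is only for readability.)

	out/minikube-linux-amd64 -p functional-573959 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-573959 image ls | grep addon-resizer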

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (178.61s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-656945 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-656945 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.200176545s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-656945 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-656945 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [63cab41a-70d2-4980-8810-9be2f5d34fd0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [63cab41a-70d2-4980-8810-9be2f5d34fd0] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.006749657s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-656945 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1103 20:41:33.902685   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: no such file or directory
E1103 20:42:01.587428   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-656945 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.817154325s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-656945 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-656945 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.006203641s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
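Note: the timeout means nothing answered DNS queries at 192.168.49.2 (the node IP from the ip step above) while ingress-dns was still enabled. A hedged way to probe the responder directly (dig and its flags are assumptions about host tooling; the grep is a loose match because the ingress-dns pod labels are not shown in this log):

	dig +time=5 +tries=1 hello-john.test @192.168.49.2
	kubectl --context ingress-addon-legacy-656945 -n kube-system get pods | grep -i ingress-dns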
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-656945 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-656945 addons disable ingress-dns --alsologtostderr -v=1: (1.645230278s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-656945 addons disable ingress --alsologtostderr -v=1
E1103 20:42:34.415086   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/functional-573959/client.crt: no such file or directory
E1103 20:42:34.420398   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/functional-573959/client.crt: no such file or directory
E1103 20:42:34.430645   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/functional-573959/client.crt: no such file or directory
E1103 20:42:34.450911   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/functional-573959/client.crt: no such file or directory
E1103 20:42:34.491139   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/functional-573959/client.crt: no such file or directory
E1103 20:42:34.571404   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/functional-573959/client.crt: no such file or directory
E1103 20:42:34.731842   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/functional-573959/client.crt: no such file or directory
E1103 20:42:35.052389   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/functional-573959/client.crt: no such file or directory
E1103 20:42:35.693356   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/functional-573959/client.crt: no such file or directory
E1103 20:42:36.973848   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/functional-573959/client.crt: no such file or directory
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-656945 addons disable ingress --alsologtostderr -v=1: (7.383367826s)
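Note: the ten cert_rotation errors above fire at roughly doubling intervals (about 5ms growing to about 1.3s after 20:42:34.415), consistent with an exponential-backoff retry loop in the test process's client-go certificate watcher; they point at the functional-573959 profile deleted earlier in this run (see the Audit table below), not at this cluster.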
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-656945
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-656945:

-- stdout --
	[
	    {
	        "Id": "8f5315efa5bf5f76f89badf80216a2f1bee1f04489ca68d1e7178de2fc941740",
	        "Created": "2023-11-03T20:38:37.583769743Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 53144,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-03T20:38:37.87300294Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:efd86a3765897881549ab05896b96b2b4ff17749f0a64fb6c355478ceebc8b47",
	        "ResolvConfPath": "/var/lib/docker/containers/8f5315efa5bf5f76f89badf80216a2f1bee1f04489ca68d1e7178de2fc941740/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8f5315efa5bf5f76f89badf80216a2f1bee1f04489ca68d1e7178de2fc941740/hostname",
	        "HostsPath": "/var/lib/docker/containers/8f5315efa5bf5f76f89badf80216a2f1bee1f04489ca68d1e7178de2fc941740/hosts",
	        "LogPath": "/var/lib/docker/containers/8f5315efa5bf5f76f89badf80216a2f1bee1f04489ca68d1e7178de2fc941740/8f5315efa5bf5f76f89badf80216a2f1bee1f04489ca68d1e7178de2fc941740-json.log",
	        "Name": "/ingress-addon-legacy-656945",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-656945:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-656945",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/58a329204cbd175ffdbbdc9c56c4122d954f4ba69c6ed0d8924b43818d3c3ec7-init/diff:/var/lib/docker/overlay2/10f966e66ad11ebf0563dbe6bde99d657b975224ac619c4daa8db5a19a2b3420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/58a329204cbd175ffdbbdc9c56c4122d954f4ba69c6ed0d8924b43818d3c3ec7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/58a329204cbd175ffdbbdc9c56c4122d954f4ba69c6ed0d8924b43818d3c3ec7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/58a329204cbd175ffdbbdc9c56c4122d954f4ba69c6ed0d8924b43818d3c3ec7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-656945",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-656945/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-656945",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-656945",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-656945",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "68c9586a34eb95f57783709f962e965d8feda51e70c8c0a3a87d5fee674ec79a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/68c9586a34eb",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-656945": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8f5315efa5bf",
	                        "ingress-addon-legacy-656945"
	                    ],
	                    "NetworkID": "e8dfd49f34ae883dd8bc4eabf72257f43b6e1e0332bf86716ca2503656e91caf",
	                    "EndpointID": "8bdc7f886dc1355b08c706044dc64cbd26a06c387bd864815f5d92d39c601dde",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
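Note: when only a field or two of this dump is needed, a Go template is easier to scan than the full JSON; the 22/tcp template below is the same one minikube's own runner uses later in this log to resolve the mapped SSH port:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ingress-addon-legacy-656945
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ingress-addon-legacy-656945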
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-656945 -n ingress-addon-legacy-656945
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-656945 logs -n 25
E1103 20:42:39.534161   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/functional-573959/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-656945 logs -n 25: (1.02137551s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------------|-----------------------------|---------|----------------|---------------------|---------------------|
	|    Command     |                                     Args                                     |           Profile           |  User   |    Version     |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------------|-----------------------------|---------|----------------|---------------------|---------------------|
	| update-context | functional-573959                                                            | functional-573959           | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:38 UTC | 03 Nov 23 20:38 UTC |
	|                | update-context                                                               |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=2                                                       |                             |         |                |                     |                     |
	| update-context | functional-573959                                                            | functional-573959           | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:38 UTC | 03 Nov 23 20:38 UTC |
	|                | update-context                                                               |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=2                                                       |                             |         |                |                     |                     |
	| update-context | functional-573959                                                            | functional-573959           | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:38 UTC | 03 Nov 23 20:38 UTC |
	|                | update-context                                                               |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=2                                                       |                             |         |                |                     |                     |
	| image          | functional-573959 image ls                                                   | functional-573959           | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:38 UTC | 03 Nov 23 20:38 UTC |
	| image          | functional-573959 image save                                                 | functional-573959           | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:38 UTC | 03 Nov 23 20:38 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-573959                     |                             |         |                |                     |                     |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |                |                     |                     |
	| image          | functional-573959 image rm                                                   | functional-573959           | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:38 UTC | 03 Nov 23 20:38 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-573959                     |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |                |                     |                     |
	| image          | functional-573959 image ls                                                   | functional-573959           | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:38 UTC | 03 Nov 23 20:38 UTC |
	| image          | functional-573959 image load                                                 | functional-573959           | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:38 UTC | 03 Nov 23 20:38 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |                |                     |                     |
	| image          | functional-573959 image ls                                                   | functional-573959           | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:38 UTC | 03 Nov 23 20:38 UTC |
	| image          | functional-573959 image save --daemon                                        | functional-573959           | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:38 UTC | 03 Nov 23 20:38 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-573959                     |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |                |                     |                     |
	| image          | functional-573959                                                            | functional-573959           | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:38 UTC | 03 Nov 23 20:38 UTC |
	|                | image ls --format short                                                      |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |                |                     |                     |
	| image          | functional-573959                                                            | functional-573959           | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:38 UTC | 03 Nov 23 20:38 UTC |
	|                | image ls --format json                                                       |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |                |                     |                     |
	| image          | functional-573959                                                            | functional-573959           | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:38 UTC | 03 Nov 23 20:38 UTC |
	|                | image ls --format yaml                                                       |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |                |                     |                     |
	| image          | functional-573959                                                            | functional-573959           | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:38 UTC | 03 Nov 23 20:38 UTC |
	|                | image ls --format table                                                      |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |                |                     |                     |
	| ssh            | functional-573959 ssh pgrep                                                  | functional-573959           | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:38 UTC |                     |
	|                | buildkitd                                                                    |                             |         |                |                     |                     |
	| image          | functional-573959 image build -t                                             | functional-573959           | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:38 UTC | 03 Nov 23 20:38 UTC |
	|                | localhost/my-image:functional-573959                                         |                             |         |                |                     |                     |
	|                | testdata/build --alsologtostderr                                             |                             |         |                |                     |                     |
	| image          | functional-573959 image ls                                                   | functional-573959           | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:38 UTC | 03 Nov 23 20:38 UTC |
	| delete         | -p functional-573959                                                         | functional-573959           | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:38 UTC | 03 Nov 23 20:38 UTC |
	| start          | -p ingress-addon-legacy-656945                                               | ingress-addon-legacy-656945 | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:38 UTC | 03 Nov 23 20:39 UTC |
	|                | --kubernetes-version=v1.18.20                                                |                             |         |                |                     |                     |
	|                | --memory=4096 --wait=true                                                    |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |                |                     |                     |
	|                | -v=5 --driver=docker                                                         |                             |         |                |                     |                     |
	|                | --container-runtime=crio                                                     |                             |         |                |                     |                     |
	| addons         | ingress-addon-legacy-656945                                                  | ingress-addon-legacy-656945 | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:39 UTC | 03 Nov 23 20:39 UTC |
	|                | addons enable ingress                                                        |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=5                                                       |                             |         |                |                     |                     |
	| addons         | ingress-addon-legacy-656945                                                  | ingress-addon-legacy-656945 | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:39 UTC | 03 Nov 23 20:39 UTC |
	|                | addons enable ingress-dns                                                    |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=5                                                       |                             |         |                |                     |                     |
	| ssh            | ingress-addon-legacy-656945                                                  | ingress-addon-legacy-656945 | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:40 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                                |                             |         |                |                     |                     |
	|                | -H 'Host: nginx.example.com'                                                 |                             |         |                |                     |                     |
	| ip             | ingress-addon-legacy-656945 ip                                               | ingress-addon-legacy-656945 | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:42 UTC | 03 Nov 23 20:42 UTC |
	| addons         | ingress-addon-legacy-656945                                                  | ingress-addon-legacy-656945 | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:42 UTC | 03 Nov 23 20:42 UTC |
	|                | addons disable ingress-dns                                                   |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=1                                                       |                             |         |                |                     |                     |
	| addons         | ingress-addon-legacy-656945                                                  | ingress-addon-legacy-656945 | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:42 UTC | 03 Nov 23 20:42 UTC |
	|                | addons disable ingress                                                       |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=1                                                       |                             |         |                |                     |                     |
	|----------------|------------------------------------------------------------------------------|-----------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/03 20:38:25
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1103 20:38:25.099683   52527 out.go:296] Setting OutFile to fd 1 ...
	I1103 20:38:25.099958   52527 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1103 20:38:25.099967   52527 out.go:309] Setting ErrFile to fd 2...
	I1103 20:38:25.099972   52527 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1103 20:38:25.100203   52527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17545-5130/.minikube/bin
	I1103 20:38:25.100845   52527 out.go:303] Setting JSON to false
	I1103 20:38:25.101934   52527 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1255,"bootTime":1699042650,"procs":491,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1103 20:38:25.101993   52527 start.go:138] virtualization: kvm guest
	I1103 20:38:25.104286   52527 out.go:177] * [ingress-addon-legacy-656945] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1103 20:38:25.105721   52527 out.go:177]   - MINIKUBE_LOCATION=17545
	I1103 20:38:25.107041   52527 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1103 20:38:25.105736   52527 notify.go:220] Checking for updates...
	I1103 20:38:25.109536   52527 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17545-5130/kubeconfig
	I1103 20:38:25.110916   52527 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17545-5130/.minikube
	I1103 20:38:25.112204   52527 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1103 20:38:25.113461   52527 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1103 20:38:25.114825   52527 driver.go:378] Setting default libvirt URI to qemu:///system
	I1103 20:38:25.135720   52527 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1103 20:38:25.135801   52527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1103 20:38:25.185336   52527 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-11-03 20:38:25.176206378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1103 20:38:25.185423   52527 docker.go:295] overlay module found
	I1103 20:38:25.187328   52527 out.go:177] * Using the docker driver based on user configuration
	I1103 20:38:25.188635   52527 start.go:298] selected driver: docker
	I1103 20:38:25.188648   52527 start.go:902] validating driver "docker" against <nil>
	I1103 20:38:25.188659   52527 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1103 20:38:25.189450   52527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1103 20:38:25.240245   52527 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-11-03 20:38:25.231110357 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1103 20:38:25.240403   52527 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1103 20:38:25.240653   52527 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1103 20:38:25.242554   52527 out.go:177] * Using Docker driver with root privileges
	I1103 20:38:25.243970   52527 cni.go:84] Creating CNI manager for ""
	I1103 20:38:25.243991   52527 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1103 20:38:25.244003   52527 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1103 20:38:25.244015   52527 start_flags.go:323] config:
	{Name:ingress-addon-legacy-656945 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-656945 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1103 20:38:25.245637   52527 out.go:177] * Starting control plane node ingress-addon-legacy-656945 in cluster ingress-addon-legacy-656945
	I1103 20:38:25.246971   52527 cache.go:121] Beginning downloading kic base image for docker with crio
	I1103 20:38:25.248341   52527 out.go:177] * Pulling base image ...
	I1103 20:38:25.249731   52527 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1103 20:38:25.249761   52527 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 in local docker daemon
	I1103 20:38:25.265497   52527 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 in local docker daemon, skipping pull
	I1103 20:38:25.265518   52527 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 exists in daemon, skipping load
	I1103 20:38:25.284638   52527 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1103 20:38:25.284669   52527 cache.go:56] Caching tarball of preloaded images
	I1103 20:38:25.284791   52527 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1103 20:38:25.286465   52527 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1103 20:38:25.287892   52527 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1103 20:38:25.321916   52527 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17545-5130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1103 20:38:29.269929   52527 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1103 20:38:29.270022   52527 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17545-5130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1103 20:38:30.271455   52527 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1103 20:38:30.271815   52527 profile.go:148] Saving config to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/config.json ...
	I1103 20:38:30.271844   52527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/config.json: {Name:mkd0dd16598b1f2784cc1590f9af613d5cc7bcf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:38:30.272023   52527 cache.go:194] Successfully downloaded all kic artifacts
	I1103 20:38:30.272050   52527 start.go:365] acquiring machines lock for ingress-addon-legacy-656945: {Name:mk050a8a5250ad4103c4edf96fb77c6ce8710956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1103 20:38:30.272099   52527 start.go:369] acquired machines lock for "ingress-addon-legacy-656945" in 36.906µs
	I1103 20:38:30.272117   52527 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-656945 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-656945 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1103 20:38:30.272179   52527 start.go:125] createHost starting for "" (driver="docker")
	I1103 20:38:30.274267   52527 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1103 20:38:30.274454   52527 start.go:159] libmachine.API.Create for "ingress-addon-legacy-656945" (driver="docker")
	I1103 20:38:30.274479   52527 client.go:168] LocalClient.Create starting
	I1103 20:38:30.274531   52527 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem
	I1103 20:38:30.274559   52527 main.go:141] libmachine: Decoding PEM data...
	I1103 20:38:30.274575   52527 main.go:141] libmachine: Parsing certificate...
	I1103 20:38:30.274624   52527 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17545-5130/.minikube/certs/cert.pem
	I1103 20:38:30.274642   52527 main.go:141] libmachine: Decoding PEM data...
	I1103 20:38:30.274651   52527 main.go:141] libmachine: Parsing certificate...
	I1103 20:38:30.274922   52527 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-656945 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1103 20:38:30.290296   52527 cli_runner.go:211] docker network inspect ingress-addon-legacy-656945 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1103 20:38:30.290358   52527 network_create.go:281] running [docker network inspect ingress-addon-legacy-656945] to gather additional debugging logs...
	I1103 20:38:30.290375   52527 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-656945
	W1103 20:38:30.304592   52527 cli_runner.go:211] docker network inspect ingress-addon-legacy-656945 returned with exit code 1
	I1103 20:38:30.304625   52527 network_create.go:284] error running [docker network inspect ingress-addon-legacy-656945]: docker network inspect ingress-addon-legacy-656945: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-656945 not found
	I1103 20:38:30.304644   52527 network_create.go:286] output of [docker network inspect ingress-addon-legacy-656945]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-656945 not found
	
	** /stderr **
	I1103 20:38:30.304732   52527 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1103 20:38:30.319134   52527 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000151c0}
	I1103 20:38:30.319174   52527 network_create.go:124] attempt to create docker network ingress-addon-legacy-656945 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1103 20:38:30.319215   52527 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-656945 ingress-addon-legacy-656945
	I1103 20:38:30.366375   52527 network_create.go:108] docker network ingress-addon-legacy-656945 192.168.49.0/24 created
	I1103 20:38:30.366411   52527 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-656945" container
	I1103 20:38:30.366469   52527 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1103 20:38:30.381851   52527 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-656945 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-656945 --label created_by.minikube.sigs.k8s.io=true
	I1103 20:38:30.399640   52527 oci.go:103] Successfully created a docker volume ingress-addon-legacy-656945
	I1103 20:38:30.399730   52527 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-656945-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-656945 --entrypoint /usr/bin/test -v ingress-addon-legacy-656945:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 -d /var/lib
	I1103 20:38:32.108644   52527 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-656945-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-656945 --entrypoint /usr/bin/test -v ingress-addon-legacy-656945:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 -d /var/lib: (1.708862815s)
	I1103 20:38:32.108674   52527 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-656945
	I1103 20:38:32.108691   52527 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1103 20:38:32.108713   52527 kic.go:194] Starting extracting preloaded images to volume ...
	I1103 20:38:32.108772   52527 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17545-5130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-656945:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 -I lz4 -xf /preloaded.tar -C /extractDir
	I1103 20:38:37.519693   52527 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17545-5130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-656945:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 -I lz4 -xf /preloaded.tar -C /extractDir: (5.410874043s)
	I1103 20:38:37.519730   52527 kic.go:203] duration metric: took 5.411016 seconds to extract preloaded images to volume
	W1103 20:38:37.519843   52527 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1103 20:38:37.519929   52527 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1103 20:38:37.569795   52527 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-656945 --name ingress-addon-legacy-656945 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-656945 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-656945 --network ingress-addon-legacy-656945 --ip 192.168.49.2 --volume ingress-addon-legacy-656945:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89
	I1103 20:38:37.881006   52527 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-656945 --format={{.State.Running}}
	I1103 20:38:37.897442   52527 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-656945 --format={{.State.Status}}
	I1103 20:38:37.914526   52527 cli_runner.go:164] Run: docker exec ingress-addon-legacy-656945 stat /var/lib/dpkg/alternatives/iptables
	I1103 20:38:37.978266   52527 oci.go:144] the created container "ingress-addon-legacy-656945" has a running status.
	I1103 20:38:37.978307   52527 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17545-5130/.minikube/machines/ingress-addon-legacy-656945/id_rsa...
	I1103 20:38:38.117331   52527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/machines/ingress-addon-legacy-656945/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1103 20:38:38.117370   52527 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17545-5130/.minikube/machines/ingress-addon-legacy-656945/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1103 20:38:38.136397   52527 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-656945 --format={{.State.Status}}
	I1103 20:38:38.152093   52527 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1103 20:38:38.152115   52527 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-656945 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1103 20:38:38.222861   52527 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-656945 --format={{.State.Status}}
	I1103 20:38:38.242721   52527 machine.go:88] provisioning docker machine ...
	I1103 20:38:38.242758   52527 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-656945"
	I1103 20:38:38.242820   52527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-656945
	I1103 20:38:38.262301   52527 main.go:141] libmachine: Using SSH client type: native
	I1103 20:38:38.262671   52527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1103 20:38:38.262692   52527 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-656945 && echo "ingress-addon-legacy-656945" | sudo tee /etc/hostname
	I1103 20:38:38.263296   52527 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51210->127.0.0.1:32787: read: connection reset by peer
	I1103 20:38:41.390231   52527 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-656945
	
	I1103 20:38:41.390331   52527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-656945
	I1103 20:38:41.407578   52527 main.go:141] libmachine: Using SSH client type: native
	I1103 20:38:41.407895   52527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1103 20:38:41.407915   52527 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-656945' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-656945/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-656945' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1103 20:38:41.524306   52527 main.go:141] libmachine: SSH cmd err, output: <nil>: 
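
Editor's note: the SSH command above is an idempotent hosts-file update (skip if the hostname is already mapped, rewrite an existing 127.0.1.1 line, otherwise append). A minimal Go sketch of the same logic follows; it operates on a copy of the file rather than /etc/hosts, and the path is an assumption.

// hosts_sketch.go - idempotent hostname entry, mirroring the shell above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostname(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	// First pass: bail out if any entry already maps to this hostname.
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) >= 2 && f[len(f)-1] == name {
			return nil
		}
	}
	// Second pass: rewrite an existing 127.0.1.1 line, as the sed branch does.
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
		}
	}
	// Otherwise append, mirroring the `tee -a` branch.
	lines = append(lines, "127.0.1.1 "+name)
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostname("hosts.copy", "ingress-addon-legacy-656945"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
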
	I1103 20:38:41.524344   52527 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17545-5130/.minikube CaCertPath:/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17545-5130/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17545-5130/.minikube}
	I1103 20:38:41.524369   52527 ubuntu.go:177] setting up certificates
	I1103 20:38:41.524382   52527 provision.go:83] configureAuth start
	I1103 20:38:41.524472   52527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-656945
	I1103 20:38:41.539790   52527 provision.go:138] copyHostCerts
	I1103 20:38:41.539828   52527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17545-5130/.minikube/ca.pem
	I1103 20:38:41.539866   52527 exec_runner.go:144] found /home/jenkins/minikube-integration/17545-5130/.minikube/ca.pem, removing ...
	I1103 20:38:41.539875   52527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.pem
	I1103 20:38:41.539950   52527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17545-5130/.minikube/ca.pem (1082 bytes)
	I1103 20:38:41.540034   52527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17545-5130/.minikube/cert.pem
	I1103 20:38:41.540059   52527 exec_runner.go:144] found /home/jenkins/minikube-integration/17545-5130/.minikube/cert.pem, removing ...
	I1103 20:38:41.540067   52527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17545-5130/.minikube/cert.pem
	I1103 20:38:41.540095   52527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17545-5130/.minikube/cert.pem (1123 bytes)
	I1103 20:38:41.540159   52527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17545-5130/.minikube/key.pem
	I1103 20:38:41.540180   52527 exec_runner.go:144] found /home/jenkins/minikube-integration/17545-5130/.minikube/key.pem, removing ...
	I1103 20:38:41.540190   52527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17545-5130/.minikube/key.pem
	I1103 20:38:41.540222   52527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17545-5130/.minikube/key.pem (1679 bytes)
	I1103 20:38:41.540294   52527 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17545-5130/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-656945 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-656945]
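
Editor's note: the "generating server cert" step above signs a server certificate against the CA with the logged SAN list (hostnames plus IPs). A minimal crypto/x509 sketch of that shape follows; key sizes, serial numbers, and validity windows are illustrative assumptions, not provision.go's actual values.

// servercert_sketch.go - CA-signed server cert with a SAN list like the one logged.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA, standing in for .minikube/certs/ca.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the same kind of SAN list as the provision step.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "ingress-addon-legacy-656945"},
		DNSNames:     []string{"localhost", "minikube", "ingress-addon-legacy-656945"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
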
	I1103 20:38:41.765106   52527 provision.go:172] copyRemoteCerts
	I1103 20:38:41.765161   52527 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1103 20:38:41.765195   52527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-656945
	I1103 20:38:41.781152   52527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/ingress-addon-legacy-656945/id_rsa Username:docker}
	I1103 20:38:41.868279   52527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1103 20:38:41.868342   52527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1103 20:38:41.888724   52527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1103 20:38:41.888878   52527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1103 20:38:41.908887   52527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1103 20:38:41.908946   52527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1103 20:38:41.928704   52527 provision.go:86] duration metric: configureAuth took 404.311203ms
	I1103 20:38:41.928731   52527 ubuntu.go:193] setting minikube options for container-runtime
	I1103 20:38:41.928944   52527 config.go:182] Loaded profile config "ingress-addon-legacy-656945": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1103 20:38:41.929051   52527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-656945
	I1103 20:38:41.944389   52527 main.go:141] libmachine: Using SSH client type: native
	I1103 20:38:41.944835   52527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1103 20:38:41.944895   52527 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1103 20:38:42.167737   52527 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1103 20:38:42.167760   52527 machine.go:91] provisioned docker machine in 3.925017531s
	I1103 20:38:42.167770   52527 client.go:171] LocalClient.Create took 11.893285429s
	I1103 20:38:42.167790   52527 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-656945" took 11.893335632s
	I1103 20:38:42.167799   52527 start.go:300] post-start starting for "ingress-addon-legacy-656945" (driver="docker")
	I1103 20:38:42.167810   52527 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1103 20:38:42.167861   52527 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1103 20:38:42.167906   52527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-656945
	I1103 20:38:42.184068   52527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/ingress-addon-legacy-656945/id_rsa Username:docker}
	I1103 20:38:42.272696   52527 ssh_runner.go:195] Run: cat /etc/os-release
	I1103 20:38:42.275539   52527 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1103 20:38:42.275570   52527 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1103 20:38:42.275578   52527 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1103 20:38:42.275584   52527 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1103 20:38:42.275593   52527 filesync.go:126] Scanning /home/jenkins/minikube-integration/17545-5130/.minikube/addons for local assets ...
	I1103 20:38:42.275645   52527 filesync.go:126] Scanning /home/jenkins/minikube-integration/17545-5130/.minikube/files for local assets ...
	I1103 20:38:42.275718   52527 filesync.go:149] local asset: /home/jenkins/minikube-integration/17545-5130/.minikube/files/etc/ssl/certs/118872.pem -> 118872.pem in /etc/ssl/certs
	I1103 20:38:42.275729   52527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/files/etc/ssl/certs/118872.pem -> /etc/ssl/certs/118872.pem
	I1103 20:38:42.275818   52527 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1103 20:38:42.283128   52527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/files/etc/ssl/certs/118872.pem --> /etc/ssl/certs/118872.pem (1708 bytes)
	I1103 20:38:42.303215   52527 start.go:303] post-start completed in 135.404762ms
	I1103 20:38:42.303518   52527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-656945
	I1103 20:38:42.320327   52527 profile.go:148] Saving config to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/config.json ...
	I1103 20:38:42.320572   52527 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1103 20:38:42.320607   52527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-656945
	I1103 20:38:42.336012   52527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/ingress-addon-legacy-656945/id_rsa Username:docker}
	I1103 20:38:42.421131   52527 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1103 20:38:42.424953   52527 start.go:128] duration metric: createHost completed in 12.152765446s
	I1103 20:38:42.424977   52527 start.go:83] releasing machines lock for "ingress-addon-legacy-656945", held for 12.152866615s
	I1103 20:38:42.425044   52527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-656945
	I1103 20:38:42.440525   52527 ssh_runner.go:195] Run: cat /version.json
	I1103 20:38:42.440561   52527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-656945
	I1103 20:38:42.440598   52527 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1103 20:38:42.440664   52527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-656945
	I1103 20:38:42.456087   52527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/ingress-addon-legacy-656945/id_rsa Username:docker}
	I1103 20:38:42.457521   52527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/ingress-addon-legacy-656945/id_rsa Username:docker}
	I1103 20:38:42.539637   52527 ssh_runner.go:195] Run: systemctl --version
	I1103 20:38:42.629465   52527 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1103 20:38:42.765112   52527 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1103 20:38:42.769246   52527 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1103 20:38:42.785614   52527 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1103 20:38:42.785674   52527 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1103 20:38:42.810742   52527 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1103 20:38:42.810766   52527 start.go:472] detecting cgroup driver to use...
	I1103 20:38:42.810798   52527 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1103 20:38:42.810849   52527 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1103 20:38:42.824238   52527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1103 20:38:42.833600   52527 docker.go:203] disabling cri-docker service (if available) ...
	I1103 20:38:42.833642   52527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1103 20:38:42.844573   52527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1103 20:38:42.855993   52527 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1103 20:38:42.929162   52527 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1103 20:38:43.005515   52527 docker.go:219] disabling docker service ...
	I1103 20:38:43.005573   52527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1103 20:38:43.021914   52527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1103 20:38:43.031634   52527 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1103 20:38:43.104596   52527 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1103 20:38:43.181710   52527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1103 20:38:43.191228   52527 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1103 20:38:43.204375   52527 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1103 20:38:43.204443   52527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1103 20:38:43.212939   52527 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1103 20:38:43.212991   52527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1103 20:38:43.221500   52527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1103 20:38:43.229583   52527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1103 20:38:43.237801   52527 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1103 20:38:43.245500   52527 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1103 20:38:43.252723   52527 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1103 20:38:43.259519   52527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1103 20:38:43.331819   52527 ssh_runner.go:195] Run: sudo systemctl restart crio
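
Editor's note: the sed invocations above rewrite whole key lines (pause_image, cgroup_manager) in /etc/crio/crio.conf.d/02-crio.conf before the crio restart. A minimal Go sketch of that sed-style replacement follows; taking the file path from the command line is a safety assumption, not minikube's behavior.

// crioconf_sketch.go - sed-style whole-line key replacement in a config file.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func setKey(conf []byte, key, value string) []byte {
	// Mirror `sed -i 's|^.*key = .*$|key = "value"|'`: rewrite the whole line.
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
}

func main() {
	path := os.Args[1]
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	data = setKey(data, "pause_image", "registry.k8s.io/pause:3.2")
	data = setKey(data, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, data, 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
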
	I1103 20:38:43.434682   52527 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1103 20:38:43.434745   52527 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1103 20:38:43.437882   52527 start.go:540] Will wait 60s for crictl version
	I1103 20:38:43.437924   52527 ssh_runner.go:195] Run: which crictl
	I1103 20:38:43.441030   52527 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1103 20:38:43.472737   52527 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1103 20:38:43.472822   52527 ssh_runner.go:195] Run: crio --version
	I1103 20:38:43.504779   52527 ssh_runner.go:195] Run: crio --version
	I1103 20:38:43.538935   52527 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1103 20:38:43.540351   52527 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-656945 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1103 20:38:43.555210   52527 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1103 20:38:43.558597   52527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1103 20:38:43.568086   52527 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1103 20:38:43.568138   52527 ssh_runner.go:195] Run: sudo crictl images --output json
	I1103 20:38:43.611624   52527 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1103 20:38:43.611682   52527 ssh_runner.go:195] Run: which lz4
	I1103 20:38:43.614941   52527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1103 20:38:43.615024   52527 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1103 20:38:43.618134   52527 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1103 20:38:43.618156   52527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1103 20:38:44.513185   52527 crio.go:444] Took 0.898184 seconds to copy over tarball
	I1103 20:38:44.513271   52527 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1103 20:38:46.803996   52527 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.290683568s)
	I1103 20:38:46.804030   52527 crio.go:451] Took 2.290820 seconds to extract the tarball
	I1103 20:38:46.804042   52527 ssh_runner.go:146] rm: /preloaded.tar.lz4
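
Editor's note: the preload step above copies a ~495 MB tarball and unpacks it with `tar -I lz4 -C /var -xf`. A minimal Go sketch of reading such an lz4-compressed tar follows; it only lists entries instead of unpacking to /var, and using github.com/pierrec/lz4/v4 is an assumption (minikube shells out to the lz4 binary instead).

// preload_sketch.go - list entries of an lz4-compressed tarball.
package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"

	"github.com/pierrec/lz4/v4"
)

func main() {
	f, err := os.Open("preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Decompress the lz4 frame on the fly and walk the tar stream inside it.
	tr := tar.NewReader(lz4.NewReader(f))
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%s (%d bytes)\n", hdr.Name, hdr.Size)
	}
}
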
	I1103 20:38:46.873054   52527 ssh_runner.go:195] Run: sudo crictl images --output json
	I1103 20:38:46.903208   52527 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1103 20:38:46.903232   52527 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1103 20:38:46.903273   52527 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1103 20:38:46.903290   52527 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1103 20:38:46.903329   52527 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1103 20:38:46.903355   52527 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1103 20:38:46.903381   52527 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1103 20:38:46.903360   52527 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1103 20:38:46.903334   52527 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1103 20:38:46.903379   52527 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1103 20:38:46.904406   52527 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1103 20:38:46.904413   52527 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1103 20:38:46.904415   52527 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1103 20:38:46.904441   52527 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1103 20:38:46.904462   52527 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1103 20:38:46.904507   52527 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1103 20:38:46.904518   52527 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1103 20:38:46.904523   52527 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1103 20:38:47.082980   52527 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1103 20:38:47.091201   52527 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1103 20:38:47.100367   52527 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1103 20:38:47.132141   52527 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1103 20:38:47.144155   52527 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1103 20:38:47.175965   52527 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1103 20:38:47.207334   52527 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1103 20:38:47.219245   52527 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1103 20:38:47.219289   52527 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1103 20:38:47.219316   52527 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1103 20:38:47.219328   52527 ssh_runner.go:195] Run: which crictl
	I1103 20:38:47.219354   52527 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1103 20:38:47.219396   52527 ssh_runner.go:195] Run: which crictl
	I1103 20:38:47.219405   52527 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1103 20:38:47.219431   52527 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1103 20:38:47.219460   52527 ssh_runner.go:195] Run: which crictl
	I1103 20:38:47.219471   52527 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1103 20:38:47.219510   52527 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1103 20:38:47.219548   52527 ssh_runner.go:195] Run: which crictl
	I1103 20:38:47.224627   52527 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1103 20:38:47.224658   52527 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1103 20:38:47.224696   52527 ssh_runner.go:195] Run: which crictl
	I1103 20:38:47.243874   52527 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1103 20:38:47.243912   52527 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1103 20:38:47.243934   52527 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1103 20:38:47.243942   52527 ssh_runner.go:195] Run: which crictl
	I1103 20:38:47.244000   52527 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1103 20:38:47.244023   52527 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1103 20:38:47.244031   52527 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1103 20:38:47.244082   52527 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1103 20:38:47.247404   52527 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1103 20:38:47.247865   52527 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1103 20:38:47.406984   52527 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1103 20:38:47.407048   52527 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1103 20:38:47.407058   52527 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1103 20:38:47.407106   52527 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1103 20:38:47.407153   52527 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1103 20:38:47.409976   52527 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1103 20:38:47.410015   52527 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1103 20:38:47.410043   52527 ssh_runner.go:195] Run: which crictl
	I1103 20:38:47.410054   52527 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1103 20:38:47.413003   52527 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1103 20:38:47.442993   52527 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1103 20:38:47.443040   52527 cache_images.go:92] LoadImages completed in 539.796239ms
	W1103 20:38:47.443108   52527 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7: no such file or directory
	I1103 20:38:47.443187   52527 ssh_runner.go:195] Run: crio config
	I1103 20:38:47.509425   52527 cni.go:84] Creating CNI manager for ""
	I1103 20:38:47.509445   52527 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1103 20:38:47.509464   52527 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1103 20:38:47.509492   52527 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-656945 NodeName:ingress-addon-legacy-656945 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1103 20:38:47.509645   52527 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-656945"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
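
Editor's note: the generated config above stacks four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal Go sketch of round-tripping the KubeletConfiguration fields follows; the struct shape and the use of gopkg.in/yaml.v3 are assumptions for illustration, not kubeadm's own types.

// kubeletcfg_sketch.go - unmarshal a few KubeletConfiguration fields.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

const doc = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
failSwapOn: false
evictionHard:
  nodefs.available: "0%"
`

type kubeletConfig struct {
	CgroupDriver string            `yaml:"cgroupDriver"`
	HairpinMode  string            `yaml:"hairpinMode"`
	FailSwapOn   bool              `yaml:"failSwapOn"`
	EvictionHard map[string]string `yaml:"evictionHard"`
}

func main() {
	var cfg kubeletConfig
	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
		panic(err)
	}
	fmt.Println(cfg.CgroupDriver, cfg.FailSwapOn, cfg.EvictionHard["nodefs.available"])
}
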
	
	I1103 20:38:47.509734   52527 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-656945 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-656945 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1103 20:38:47.509794   52527 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1103 20:38:47.517970   52527 binaries.go:44] Found k8s binaries, skipping transfer
	I1103 20:38:47.518042   52527 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1103 20:38:47.525446   52527 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1103 20:38:47.540044   52527 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1103 20:38:47.554635   52527 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
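
Editor's note: the three scp lines above push the kubelet drop-in, unit file, and kubeadm.yaml from memory. A minimal Go sketch of rendering such a drop-in from a template follows; the flag values come from the log, but the abbreviated template itself is an assumption (the real ExecStart carries more flags, as shown earlier).

// kubeletunit_sketch.go - render a simplified kubelet systemd drop-in.
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}} --container-runtime-endpoint=unix:///var/run/crio/crio.sock

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	t.Execute(os.Stdout, map[string]string{
		"Version": "v1.18.20",
		"Node":    "ingress-addon-legacy-656945",
		"IP":      "192.168.49.2",
	})
}
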
	I1103 20:38:47.569438   52527 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1103 20:38:47.572352   52527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1103 20:38:47.582157   52527 certs.go:56] Setting up /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945 for IP: 192.168.49.2
	I1103 20:38:47.582180   52527 certs.go:190] acquiring lock for shared ca certs: {Name:mk18b7761724bd0081d8ca2b791d44e447ae6553 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:38:47.582300   52527 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.key
	I1103 20:38:47.582340   52527 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17545-5130/.minikube/proxy-client-ca.key
	I1103 20:38:47.582377   52527 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.key
	I1103 20:38:47.582389   52527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt with IP's: []
	I1103 20:38:47.808926   52527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt ...
	I1103 20:38:47.808957   52527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt: {Name:mkd043d7a4ed71dd95b5d84589ccb62e833f9597 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:38:47.809122   52527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.key ...
	I1103 20:38:47.809136   52527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.key: {Name:mk17c69bdd8ce5b0404a53c236796ecb3fa7c518 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:38:47.809199   52527 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/apiserver.key.dd3b5fb2
	I1103 20:38:47.809218   52527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1103 20:38:47.995533   52527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/apiserver.crt.dd3b5fb2 ...
	I1103 20:38:47.995563   52527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/apiserver.crt.dd3b5fb2: {Name:mk8c12ac2c7641e3d764edcc5a628b2484e3f033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:38:47.995708   52527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/apiserver.key.dd3b5fb2 ...
	I1103 20:38:47.995721   52527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/apiserver.key.dd3b5fb2: {Name:mk839a2e8750646da204c2465476349487914942 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:38:47.995780   52527 certs.go:337] copying /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/apiserver.crt
	I1103 20:38:47.995856   52527 certs.go:341] copying /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/apiserver.key
	I1103 20:38:47.995923   52527 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/proxy-client.key
	I1103 20:38:47.995940   52527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/proxy-client.crt with IP's: []
	I1103 20:38:48.183823   52527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/proxy-client.crt ...
	I1103 20:38:48.183853   52527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/proxy-client.crt: {Name:mk903dbdd669d75f32749b0c74783be40f5e7baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:38:48.184003   52527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/proxy-client.key ...
	I1103 20:38:48.184016   52527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/proxy-client.key: {Name:mkbaade43ad031e8e6ed9a04dc53fa5552c4cf64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:38:48.184079   52527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1103 20:38:48.184096   52527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1103 20:38:48.184105   52527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1103 20:38:48.184121   52527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1103 20:38:48.184133   52527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1103 20:38:48.184143   52527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1103 20:38:48.184155   52527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1103 20:38:48.184167   52527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1103 20:38:48.184217   52527 certs.go:437] found cert: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/home/jenkins/minikube-integration/17545-5130/.minikube/certs/11887.pem (1338 bytes)
	W1103 20:38:48.184252   52527 certs.go:433] ignoring /home/jenkins/minikube-integration/17545-5130/.minikube/certs/home/jenkins/minikube-integration/17545-5130/.minikube/certs/11887_empty.pem, impossibly tiny 0 bytes
	I1103 20:38:48.184262   52527 certs.go:437] found cert: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca-key.pem (1675 bytes)
	I1103 20:38:48.184296   52527 certs.go:437] found cert: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem (1082 bytes)
	I1103 20:38:48.184323   52527 certs.go:437] found cert: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/home/jenkins/minikube-integration/17545-5130/.minikube/certs/cert.pem (1123 bytes)
	I1103 20:38:48.184346   52527 certs.go:437] found cert: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/home/jenkins/minikube-integration/17545-5130/.minikube/certs/key.pem (1679 bytes)
	I1103 20:38:48.184390   52527 certs.go:437] found cert: /home/jenkins/minikube-integration/17545-5130/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17545-5130/.minikube/files/etc/ssl/certs/118872.pem (1708 bytes)
	I1103 20:38:48.184417   52527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/files/etc/ssl/certs/118872.pem -> /usr/share/ca-certificates/118872.pem
	I1103 20:38:48.184449   52527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1103 20:38:48.184461   52527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/11887.pem -> /usr/share/ca-certificates/11887.pem
	I1103 20:38:48.185042   52527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1103 20:38:48.206721   52527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1103 20:38:48.227341   52527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1103 20:38:48.247307   52527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1103 20:38:48.266980   52527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1103 20:38:48.286283   52527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1103 20:38:48.305291   52527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1103 20:38:48.324658   52527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1103 20:38:48.344187   52527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/files/etc/ssl/certs/118872.pem --> /usr/share/ca-certificates/118872.pem (1708 bytes)
	I1103 20:38:48.363483   52527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1103 20:38:48.382681   52527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/certs/11887.pem --> /usr/share/ca-certificates/11887.pem (1338 bytes)
	I1103 20:38:48.401517   52527 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1103 20:38:48.415796   52527 ssh_runner.go:195] Run: openssl version
	I1103 20:38:48.420538   52527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/118872.pem && ln -fs /usr/share/ca-certificates/118872.pem /etc/ssl/certs/118872.pem"
	I1103 20:38:48.428293   52527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/118872.pem
	I1103 20:38:48.431260   52527 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  3 20:35 /usr/share/ca-certificates/118872.pem
	I1103 20:38:48.431312   52527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/118872.pem
	I1103 20:38:48.437324   52527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/118872.pem /etc/ssl/certs/3ec20f2e.0"
	I1103 20:38:48.445106   52527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1103 20:38:48.452577   52527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1103 20:38:48.455432   52527 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  3 20:29 /usr/share/ca-certificates/minikubeCA.pem
	I1103 20:38:48.455467   52527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1103 20:38:48.461411   52527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1103 20:38:48.468956   52527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11887.pem && ln -fs /usr/share/ca-certificates/11887.pem /etc/ssl/certs/11887.pem"
	I1103 20:38:48.476458   52527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11887.pem
	I1103 20:38:48.479266   52527 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  3 20:35 /usr/share/ca-certificates/11887.pem
	I1103 20:38:48.479293   52527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11887.pem
	I1103 20:38:48.485024   52527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11887.pem /etc/ssl/certs/51391683.0"
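
Editor's note: each cert install above follows the same pattern: link the PEM into /usr/share/ca-certificates, ask openssl for its subject hash, then symlink /etc/ssl/certs/<hash>.0 to it. A minimal Go sketch of the hash-and-symlink step follows; it requires the openssl binary on PATH, and the local paths are placeholders, not the container's real /etc/ssl/certs.

// certhash_sketch.go - openssl subject-hash symlink, mirroring the ln -fs above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
	link := filepath.Join("certs.d", hash+".0")
	// `ln -fs`: remove a stale link first, then create the new one.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(link, "->", cert)
}
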
	I1103 20:38:48.492405   52527 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1103 20:38:48.495204   52527 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1103 20:38:48.495246   52527 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-656945 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-656945 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1103 20:38:48.495300   52527 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1103 20:38:48.495343   52527 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1103 20:38:48.526345   52527 cri.go:89] found id: ""
	I1103 20:38:48.526405   52527 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1103 20:38:48.534211   52527 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1103 20:38:48.541596   52527 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1103 20:38:48.541645   52527 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1103 20:38:48.548734   52527 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1103 20:38:48.548764   52527 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1103 20:38:48.589501   52527 kubeadm.go:322] W1103 20:38:48.588997    1378 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1103 20:38:48.626348   52527 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1046-gcp\n", err: exit status 1
	I1103 20:38:48.691875   52527 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1103 20:38:51.298931   52527 kubeadm.go:322] W1103 20:38:51.298532    1378 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1103 20:38:51.300251   52527 kubeadm.go:322] W1103 20:38:51.299929    1378 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1103 20:38:58.753146   52527 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1103 20:38:58.753207   52527 kubeadm.go:322] [preflight] Running pre-flight checks
	I1103 20:38:58.753319   52527 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1103 20:38:58.753429   52527 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1046-gcp
	I1103 20:38:58.753485   52527 kubeadm.go:322] OS: Linux
	I1103 20:38:58.753532   52527 kubeadm.go:322] CGROUPS_CPU: enabled
	I1103 20:38:58.753583   52527 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1103 20:38:58.753649   52527 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1103 20:38:58.753725   52527 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1103 20:38:58.753806   52527 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1103 20:38:58.753850   52527 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1103 20:38:58.753910   52527 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1103 20:38:58.753991   52527 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1103 20:38:58.754070   52527 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1103 20:38:58.754156   52527 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1103 20:38:58.754225   52527 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1103 20:38:58.754267   52527 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1103 20:38:58.754330   52527 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1103 20:38:58.756803   52527 out.go:204]   - Generating certificates and keys ...
	I1103 20:38:58.756908   52527 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1103 20:38:58.756986   52527 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1103 20:38:58.757056   52527 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1103 20:38:58.757108   52527 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1103 20:38:58.757158   52527 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1103 20:38:58.757200   52527 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1103 20:38:58.757249   52527 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1103 20:38:58.757357   52527 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-656945 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1103 20:38:58.757402   52527 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1103 20:38:58.757508   52527 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-656945 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1103 20:38:58.757577   52527 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1103 20:38:58.757639   52527 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1103 20:38:58.757676   52527 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1103 20:38:58.757722   52527 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1103 20:38:58.757787   52527 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1103 20:38:58.757837   52527 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1103 20:38:58.757899   52527 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1103 20:38:58.757952   52527 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1103 20:38:58.758015   52527 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1103 20:38:58.759265   52527 out.go:204]   - Booting up control plane ...
	I1103 20:38:58.759336   52527 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1103 20:38:58.759398   52527 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1103 20:38:58.759454   52527 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1103 20:38:58.759535   52527 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1103 20:38:58.759690   52527 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1103 20:38:58.759768   52527 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.002019 seconds
	I1103 20:38:58.759871   52527 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1103 20:38:58.759981   52527 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1103 20:38:58.760030   52527 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1103 20:38:58.760138   52527 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-656945 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1103 20:38:58.760198   52527 kubeadm.go:322] [bootstrap-token] Using token: oeld3k.y8haxbx2x2oxrzja
	I1103 20:38:58.761788   52527 out.go:204]   - Configuring RBAC rules ...
	I1103 20:38:58.761871   52527 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1103 20:38:58.761943   52527 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1103 20:38:58.762059   52527 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1103 20:38:58.762182   52527 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1103 20:38:58.762305   52527 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1103 20:38:58.762397   52527 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1103 20:38:58.762505   52527 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1103 20:38:58.762579   52527 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1103 20:38:58.762628   52527 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1103 20:38:58.762635   52527 kubeadm.go:322] 
	I1103 20:38:58.762689   52527 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1103 20:38:58.762697   52527 kubeadm.go:322] 
	I1103 20:38:58.762762   52527 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1103 20:38:58.762769   52527 kubeadm.go:322] 
	I1103 20:38:58.762790   52527 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1103 20:38:58.762844   52527 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1103 20:38:58.762889   52527 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1103 20:38:58.762895   52527 kubeadm.go:322] 
	I1103 20:38:58.762936   52527 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1103 20:38:58.763000   52527 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1103 20:38:58.763056   52527 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1103 20:38:58.763062   52527 kubeadm.go:322] 
	I1103 20:38:58.763131   52527 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1103 20:38:58.763197   52527 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1103 20:38:58.763202   52527 kubeadm.go:322] 
	I1103 20:38:58.763270   52527 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token oeld3k.y8haxbx2x2oxrzja \
	I1103 20:38:58.763363   52527 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:1257a42a1bc28f8e43e186124137176ba467e34a8eab3dd89eabd155069822df \
	I1103 20:38:58.763385   52527 kubeadm.go:322]     --control-plane 
	I1103 20:38:58.763388   52527 kubeadm.go:322] 
	I1103 20:38:58.763460   52527 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1103 20:38:58.763469   52527 kubeadm.go:322] 
	I1103 20:38:58.763533   52527 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token oeld3k.y8haxbx2x2oxrzja \
	I1103 20:38:58.763628   52527 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:1257a42a1bc28f8e43e186124137176ba467e34a8eab3dd89eabd155069822df 
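
The join commands above carry a bootstrap token (oeld3k.y8haxbx2x2oxrzja) with a limited TTL. A minimal sketch for regenerating them later, assuming shell access to the control-plane node:

    # List current bootstrap tokens and their expiry
    sudo kubeadm token list
    # Mint a fresh token and print a ready-to-use join command
    sudo kubeadm token create --print-join-command
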
	I1103 20:38:58.763640   52527 cni.go:84] Creating CNI manager for ""
	I1103 20:38:58.763646   52527 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1103 20:38:58.765326   52527 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1103 20:38:58.766963   52527 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1103 20:38:58.770549   52527 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1103 20:38:58.770571   52527 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1103 20:38:58.785974   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
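
Because the docker driver is paired with the crio runtime, minikube applies kindnet as the CNI (cni.go lines above). A quick sanity check that the CNI pods came up; the 'app=kindnet' selector is an assumption about the applied manifest:

    # kindnet runs as a DaemonSet in kube-system (label assumed from the manifest)
    kubectl --context ingress-addon-legacy-656945 -n kube-system get pods -l app=kindnet -o wide
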
	I1103 20:38:59.176049   52527 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1103 20:38:59.176118   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:38:59.176118   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=44765b58c8440feed3c9edc110a2d06dc722956e minikube.k8s.io/name=ingress-addon-legacy-656945 minikube.k8s.io/updated_at=2023_11_03T20_38_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:38:59.182877   52527 ops.go:34] apiserver oom_adj: -16
	I1103 20:38:59.245445   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:38:59.349392   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:38:59.915714   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:00.415651   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:00.915165   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:01.415195   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:01.915729   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:02.415709   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:02.915929   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:03.416030   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:03.915194   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:04.415056   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:04.915672   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:05.416048   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:05.915616   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:06.415142   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:06.915933   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:07.415257   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:07.915762   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:08.415240   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:08.915079   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:09.415451   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:09.915099   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:10.415769   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:10.915205   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:11.415334   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:11.915278   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:12.415082   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:12.915260   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:13.415533   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:13.915164   52527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:39:13.990922   52527 kubeadm.go:1081] duration metric: took 14.814855998s to wait for elevateKubeSystemPrivileges.
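
The burst of 'kubectl get sa default' calls above is a ~500ms poll loop: minikube waits for the default service account to exist before creating the minikube-rbac ClusterRoleBinding. Roughly equivalent shell, using the same paths the log shows:

    # Poll until the default service account appears (apiserver fully up)
    until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
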
	I1103 20:39:13.990961   52527 kubeadm.go:406] StartCluster complete in 25.495715932s
	I1103 20:39:13.991018   52527 settings.go:142] acquiring lock: {Name:mk78e85fd384b188b08ef0a94e618db15bb45e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:39:13.991092   52527 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17545-5130/kubeconfig
	I1103 20:39:13.991839   52527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/kubeconfig: {Name:mk13adb0876366d94fd82a065912fb44eee0cd10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:39:13.992047   52527 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1103 20:39:13.992084   52527 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1103 20:39:13.992163   52527 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-656945"
	I1103 20:39:13.992186   52527 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-656945"
	I1103 20:39:13.992195   52527 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-656945"
	I1103 20:39:13.992209   52527 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-656945"
	I1103 20:39:13.992244   52527 host.go:66] Checking if "ingress-addon-legacy-656945" exists ...
	I1103 20:39:13.992293   52527 config.go:182] Loaded profile config "ingress-addon-legacy-656945": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1103 20:39:13.992638   52527 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-656945 --format={{.State.Status}}
	I1103 20:39:13.992810   52527 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-656945 --format={{.State.Status}}
	I1103 20:39:13.992727   52527 kapi.go:59] client config for ingress-addon-legacy-656945: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt", KeyFile:"/home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.key", CAFile:"/home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bb20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1103 20:39:13.993556   52527 cert_rotation.go:137] Starting client certificate rotation controller
	I1103 20:39:14.014864   52527 kapi.go:59] client config for ingress-addon-legacy-656945: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt", KeyFile:"/home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.key", CAFile:"/home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bb20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1103 20:39:14.015199   52527 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-656945"
	I1103 20:39:14.015241   52527 host.go:66] Checking if "ingress-addon-legacy-656945" exists ...
	I1103 20:39:14.017290   52527 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1103 20:39:14.015808   52527 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-656945 --format={{.State.Status}}
	I1103 20:39:14.017104   52527 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-656945" context rescaled to 1 replicas
	I1103 20:39:14.018930   52527 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1103 20:39:14.020773   52527 out.go:177] * Verifying Kubernetes components...
	I1103 20:39:14.019021   52527 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1103 20:39:14.022263   52527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1103 20:39:14.022289   52527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1103 20:39:14.022308   52527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-656945
	I1103 20:39:14.044119   52527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/ingress-addon-legacy-656945/id_rsa Username:docker}
	I1103 20:39:14.047364   52527 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1103 20:39:14.047382   52527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1103 20:39:14.047427   52527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-656945
	I1103 20:39:14.062995   52527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/ingress-addon-legacy-656945/id_rsa Username:docker}
	I1103 20:39:14.201011   52527 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
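
The sed pipeline above splices a hosts stanza (192.168.49.1 host.minikube.internal, with fallthrough) and a log directive into the CoreDNS Corefile before replacing the ConfigMap. To inspect the result in the live cluster:

    # Dump the patched Corefile from the coredns ConfigMap
    kubectl --context ingress-addon-legacy-656945 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
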
	I1103 20:39:14.201602   52527 kapi.go:59] client config for ingress-addon-legacy-656945: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt", KeyFile:"/home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.key", CAFile:"/home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bb20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1103 20:39:14.201833   52527 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-656945" to be "Ready" ...
	I1103 20:39:14.207945   52527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1103 20:39:14.307736   52527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1103 20:39:14.631480   52527 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1103 20:39:14.737163   52527 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1103 20:39:14.738543   52527 addons.go:502] enable addons completed in 746.464613ms: enabled=[storage-provisioner default-storageclass]
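
Only storage-provisioner and default-storageclass are enabled here; the ingress addon exercised by the failing test is turned on later by the test itself. Addon state for the profile can be listed at any point:

    out/minikube-linux-amd64 -p ingress-addon-legacy-656945 addons list
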
	I1103 20:39:16.211467   52527 node_ready.go:58] node "ingress-addon-legacy-656945" has status "Ready":"False"
	I1103 20:39:18.710775   52527 node_ready.go:58] node "ingress-addon-legacy-656945" has status "Ready":"False"
	I1103 20:39:19.210767   52527 node_ready.go:49] node "ingress-addon-legacy-656945" has status "Ready":"True"
	I1103 20:39:19.210791   52527 node_ready.go:38] duration metric: took 5.008933105s waiting for node "ingress-addon-legacy-656945" to be "Ready" ...
	I1103 20:39:19.210799   52527 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1103 20:39:19.217124   52527 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-hlc42" in "kube-system" namespace to be "Ready" ...
	I1103 20:39:21.224090   52527 pod_ready.go:102] pod "coredns-66bff467f8-hlc42" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-03 20:39:13 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1103 20:39:23.224872   52527 pod_ready.go:102] pod "coredns-66bff467f8-hlc42" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-03 20:39:13 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1103 20:39:25.226899   52527 pod_ready.go:102] pod "coredns-66bff467f8-hlc42" in "kube-system" namespace has status "Ready":"False"
	I1103 20:39:27.726418   52527 pod_ready.go:102] pod "coredns-66bff467f8-hlc42" in "kube-system" namespace has status "Ready":"False"
	I1103 20:39:29.726218   52527 pod_ready.go:92] pod "coredns-66bff467f8-hlc42" in "kube-system" namespace has status "Ready":"True"
	I1103 20:39:29.726244   52527 pod_ready.go:81] duration metric: took 10.509100153s waiting for pod "coredns-66bff467f8-hlc42" in "kube-system" namespace to be "Ready" ...
	I1103 20:39:29.726253   52527 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-656945" in "kube-system" namespace to be "Ready" ...
	I1103 20:39:29.729923   52527 pod_ready.go:92] pod "etcd-ingress-addon-legacy-656945" in "kube-system" namespace has status "Ready":"True"
	I1103 20:39:29.729945   52527 pod_ready.go:81] duration metric: took 3.685686ms waiting for pod "etcd-ingress-addon-legacy-656945" in "kube-system" namespace to be "Ready" ...
	I1103 20:39:29.729956   52527 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-656945" in "kube-system" namespace to be "Ready" ...
	I1103 20:39:29.733479   52527 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-656945" in "kube-system" namespace has status "Ready":"True"
	I1103 20:39:29.733504   52527 pod_ready.go:81] duration metric: took 3.541354ms waiting for pod "kube-apiserver-ingress-addon-legacy-656945" in "kube-system" namespace to be "Ready" ...
	I1103 20:39:29.733512   52527 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-656945" in "kube-system" namespace to be "Ready" ...
	I1103 20:39:29.737042   52527 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-656945" in "kube-system" namespace has status "Ready":"True"
	I1103 20:39:29.737058   52527 pod_ready.go:81] duration metric: took 3.540893ms waiting for pod "kube-controller-manager-ingress-addon-legacy-656945" in "kube-system" namespace to be "Ready" ...
	I1103 20:39:29.737066   52527 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ds9d4" in "kube-system" namespace to be "Ready" ...
	I1103 20:39:29.740511   52527 pod_ready.go:92] pod "kube-proxy-ds9d4" in "kube-system" namespace has status "Ready":"True"
	I1103 20:39:29.740526   52527 pod_ready.go:81] duration metric: took 3.455405ms waiting for pod "kube-proxy-ds9d4" in "kube-system" namespace to be "Ready" ...
	I1103 20:39:29.740538   52527 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-656945" in "kube-system" namespace to be "Ready" ...
	I1103 20:39:29.922007   52527 request.go:629] Waited for 181.357122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-656945
	I1103 20:39:30.121822   52527 request.go:629] Waited for 197.177933ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-656945
	I1103 20:39:30.124492   52527 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-656945" in "kube-system" namespace has status "Ready":"True"
	I1103 20:39:30.124511   52527 pod_ready.go:81] duration metric: took 383.966933ms waiting for pod "kube-scheduler-ingress-addon-legacy-656945" in "kube-system" namespace to be "Ready" ...
	I1103 20:39:30.124527   52527 pod_ready.go:38] duration metric: took 10.913713548s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
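
The extra wait above is roughly what 'kubectl wait' expresses per selector; an equivalent one-liner for the kube-dns label (timeout chosen to match the log's 6m0s budget):

    kubectl --context ingress-addon-legacy-656945 -n kube-system \
        wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m0s
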
	I1103 20:39:30.124541   52527 api_server.go:52] waiting for apiserver process to appear ...
	I1103 20:39:30.124597   52527 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1103 20:39:30.135039   52527 api_server.go:72] duration metric: took 16.116061544s to wait for apiserver process to appear ...
	I1103 20:39:30.135064   52527 api_server.go:88] waiting for apiserver healthz status ...
	I1103 20:39:30.135084   52527 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1103 20:39:30.139549   52527 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
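
The healthz probe is a plain HTTPS GET against the apiserver; it can be reproduced by hand with the cluster CA from the kubeconfig paths logged earlier:

    curl --cacert /home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt \
        https://192.168.49.2:8443/healthz
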
	I1103 20:39:30.140342   52527 api_server.go:141] control plane version: v1.18.20
	I1103 20:39:30.140365   52527 api_server.go:131] duration metric: took 5.293685ms to wait for apiserver health ...
	I1103 20:39:30.140375   52527 system_pods.go:43] waiting for kube-system pods to appear ...
	I1103 20:39:30.322686   52527 request.go:629] Waited for 182.249387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1103 20:39:30.328182   52527 system_pods.go:59] 8 kube-system pods found
	I1103 20:39:30.328216   52527 system_pods.go:61] "coredns-66bff467f8-hlc42" [6630e399-0206-4e54-9ebf-943c782b1210] Running
	I1103 20:39:30.328223   52527 system_pods.go:61] "etcd-ingress-addon-legacy-656945" [063bd243-9444-474f-9a10-a9bf01ec411d] Running
	I1103 20:39:30.328230   52527 system_pods.go:61] "kindnet-xzl4h" [2bf4a84d-e271-4895-a435-833da3a18c4c] Running
	I1103 20:39:30.328234   52527 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-656945" [92697f5e-2a5d-4699-934c-f7f0278de5bc] Running
	I1103 20:39:30.328238   52527 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-656945" [d8885506-cd9b-46df-9097-593cde71bcc9] Running
	I1103 20:39:30.328242   52527 system_pods.go:61] "kube-proxy-ds9d4" [32783b26-74af-45fa-b25a-6c7ed56f5798] Running
	I1103 20:39:30.328247   52527 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-656945" [da9900d0-e2c9-4d7a-9793-240e8025cc33] Running
	I1103 20:39:30.328251   52527 system_pods.go:61] "storage-provisioner" [9f65ca5c-3641-4a9c-b4f8-4d28d375f42c] Running
	I1103 20:39:30.328259   52527 system_pods.go:74] duration metric: took 187.875208ms to wait for pod list to return data ...
	I1103 20:39:30.328272   52527 default_sa.go:34] waiting for default service account to be created ...
	I1103 20:39:30.522702   52527 request.go:629] Waited for 194.353172ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1103 20:39:30.525071   52527 default_sa.go:45] found service account: "default"
	I1103 20:39:30.525101   52527 default_sa.go:55] duration metric: took 196.818335ms for default service account to be created ...
	I1103 20:39:30.525111   52527 system_pods.go:116] waiting for k8s-apps to be running ...
	I1103 20:39:30.722570   52527 request.go:629] Waited for 197.351958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1103 20:39:30.727654   52527 system_pods.go:86] 8 kube-system pods found
	I1103 20:39:30.727682   52527 system_pods.go:89] "coredns-66bff467f8-hlc42" [6630e399-0206-4e54-9ebf-943c782b1210] Running
	I1103 20:39:30.727694   52527 system_pods.go:89] "etcd-ingress-addon-legacy-656945" [063bd243-9444-474f-9a10-a9bf01ec411d] Running
	I1103 20:39:30.727700   52527 system_pods.go:89] "kindnet-xzl4h" [2bf4a84d-e271-4895-a435-833da3a18c4c] Running
	I1103 20:39:30.727706   52527 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-656945" [92697f5e-2a5d-4699-934c-f7f0278de5bc] Running
	I1103 20:39:30.727716   52527 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-656945" [d8885506-cd9b-46df-9097-593cde71bcc9] Running
	I1103 20:39:30.727726   52527 system_pods.go:89] "kube-proxy-ds9d4" [32783b26-74af-45fa-b25a-6c7ed56f5798] Running
	I1103 20:39:30.727734   52527 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-656945" [da9900d0-e2c9-4d7a-9793-240e8025cc33] Running
	I1103 20:39:30.727742   52527 system_pods.go:89] "storage-provisioner" [9f65ca5c-3641-4a9c-b4f8-4d28d375f42c] Running
	I1103 20:39:30.727752   52527 system_pods.go:126] duration metric: took 202.632435ms to wait for k8s-apps to be running ...
	I1103 20:39:30.727767   52527 system_svc.go:44] waiting for kubelet service to be running ....
	I1103 20:39:30.727820   52527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1103 20:39:30.738803   52527 system_svc.go:56] duration metric: took 11.028582ms WaitForService to wait for kubelet.
	I1103 20:39:30.738824   52527 kubeadm.go:581] duration metric: took 16.719852773s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1103 20:39:30.738847   52527 node_conditions.go:102] verifying NodePressure condition ...
	I1103 20:39:30.922251   52527 request.go:629] Waited for 183.317018ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1103 20:39:30.925007   52527 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1103 20:39:30.925029   52527 node_conditions.go:123] node cpu capacity is 8
	I1103 20:39:30.925040   52527 node_conditions.go:105] duration metric: took 186.18909ms to run NodePressure ...
	I1103 20:39:30.925051   52527 start.go:228] waiting for startup goroutines ...
	I1103 20:39:30.925057   52527 start.go:233] waiting for cluster config update ...
	I1103 20:39:30.925066   52527 start.go:242] writing updated cluster config ...
	I1103 20:39:30.925308   52527 ssh_runner.go:195] Run: rm -f paused
	I1103 20:39:30.972096   52527 start.go:600] kubectl: 1.28.3, cluster: 1.18.20 (minor skew: 10)
	I1103 20:39:30.974072   52527 out.go:177] 
	W1103 20:39:30.975663   52527 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.18.20.
	I1103 20:39:30.977302   52527 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1103 20:39:30.978740   52527 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-656945" cluster and "default" namespace by default
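
The warning above flags client/server skew: kubectl 1.28.3 against a v1.18.20 apiserver is 10 minor versions apart, far outside the supported +/-1 window. The suggested workaround runs a version-matched client through minikube, e.g.:

    out/minikube-linux-amd64 -p ingress-addon-legacy-656945 kubectl -- get pods -A
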
	
	* 
	* ==> CRI-O <==
	* Nov 03 20:42:30 ingress-addon-legacy-656945 crio[958]: time="2023-11-03 20:42:30.907034383Z" level=info msg="Stopping pod sandbox: bbd71cde016644935ae941403d51a362f20e42ba0502924d0fa6844e48249484" id=a66e0f3f-0a1b-40f8-9e79-b6c31daf7198 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 03 20:42:30 ingress-addon-legacy-656945 crio[958]: time="2023-11-03 20:42:30.908321061Z" level=info msg="Stopped pod sandbox: bbd71cde016644935ae941403d51a362f20e42ba0502924d0fa6844e48249484" id=a66e0f3f-0a1b-40f8-9e79-b6c31daf7198 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 03 20:42:31 ingress-addon-legacy-656945 crio[958]: time="2023-11-03 20:42:31.243138468Z" level=info msg="Stopping pod sandbox: bbd71cde016644935ae941403d51a362f20e42ba0502924d0fa6844e48249484" id=3806c389-e0f4-40c4-bafb-b868f5687176 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 03 20:42:31 ingress-addon-legacy-656945 crio[958]: time="2023-11-03 20:42:31.243197374Z" level=info msg="Stopped pod sandbox (already stopped): bbd71cde016644935ae941403d51a362f20e42ba0502924d0fa6844e48249484" id=3806c389-e0f4-40c4-bafb-b868f5687176 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 03 20:42:31 ingress-addon-legacy-656945 crio[958]: time="2023-11-03 20:42:31.977755040Z" level=info msg="Stopping container: cce10b14e05b102ccf799774d8914f995b1b26b018920d4ee3f06b7c2ffbc3d9 (timeout: 2s)" id=ccf7751c-2b40-4307-817e-2c195032cb0a name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 03 20:42:31 ingress-addon-legacy-656945 crio[958]: time="2023-11-03 20:42:31.979313491Z" level=info msg="Stopping container: cce10b14e05b102ccf799774d8914f995b1b26b018920d4ee3f06b7c2ffbc3d9 (timeout: 2s)" id=59900ea7-c849-4636-9120-315fd18a9b0b name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 03 20:42:33 ingress-addon-legacy-656945 crio[958]: time="2023-11-03 20:42:33.987480459Z" level=warning msg="Stopping container cce10b14e05b102ccf799774d8914f995b1b26b018920d4ee3f06b7c2ffbc3d9 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=ccf7751c-2b40-4307-817e-2c195032cb0a name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 03 20:42:34 ingress-addon-legacy-656945 conmon[3479]: conmon cce10b14e05b102ccf79 <ninfo>: container 3491 exited with status 137
	Nov 03 20:42:34 ingress-addon-legacy-656945 crio[958]: time="2023-11-03 20:42:34.145510893Z" level=info msg="Stopped container cce10b14e05b102ccf799774d8914f995b1b26b018920d4ee3f06b7c2ffbc3d9: ingress-nginx/ingress-nginx-controller-7fcf777cb7-5ltkt/controller" id=59900ea7-c849-4636-9120-315fd18a9b0b name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 03 20:42:34 ingress-addon-legacy-656945 crio[958]: time="2023-11-03 20:42:34.145509718Z" level=info msg="Stopped container cce10b14e05b102ccf799774d8914f995b1b26b018920d4ee3f06b7c2ffbc3d9: ingress-nginx/ingress-nginx-controller-7fcf777cb7-5ltkt/controller" id=ccf7751c-2b40-4307-817e-2c195032cb0a name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 03 20:42:34 ingress-addon-legacy-656945 crio[958]: time="2023-11-03 20:42:34.146071352Z" level=info msg="Stopping pod sandbox: fe030154af880a46c7e519423f4b51509579f589ad4285521d4f5ac5402af0cd" id=221abc86-3008-444d-a3b5-dd0f72eb35e2 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 03 20:42:34 ingress-addon-legacy-656945 crio[958]: time="2023-11-03 20:42:34.146111263Z" level=info msg="Stopping pod sandbox: fe030154af880a46c7e519423f4b51509579f589ad4285521d4f5ac5402af0cd" id=8d7070c9-4920-4c75-a0d2-4c5d9d8d60d3 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 03 20:42:34 ingress-addon-legacy-656945 crio[958]: time="2023-11-03 20:42:34.148710840Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-N7GSVQYI55HOIBN6 - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-CGZPFCV6PLLX6ZNU - [0:0]\n-X KUBE-HP-CGZPFCV6PLLX6ZNU\n-X KUBE-HP-N7GSVQYI55HOIBN6\nCOMMIT\n"
	Nov 03 20:42:34 ingress-addon-legacy-656945 crio[958]: time="2023-11-03 20:42:34.149932991Z" level=info msg="Closing host port tcp:80"
	Nov 03 20:42:34 ingress-addon-legacy-656945 crio[958]: time="2023-11-03 20:42:34.149964655Z" level=info msg="Closing host port tcp:443"
	Nov 03 20:42:34 ingress-addon-legacy-656945 crio[958]: time="2023-11-03 20:42:34.150854813Z" level=info msg="Host port tcp:80 does not have an open socket"
	Nov 03 20:42:34 ingress-addon-legacy-656945 crio[958]: time="2023-11-03 20:42:34.150876845Z" level=info msg="Host port tcp:443 does not have an open socket"
	Nov 03 20:42:34 ingress-addon-legacy-656945 crio[958]: time="2023-11-03 20:42:34.151012684Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-5ltkt Namespace:ingress-nginx ID:fe030154af880a46c7e519423f4b51509579f589ad4285521d4f5ac5402af0cd UID:50516847-8aef-4e1d-9bb2-9db85c6c64ae NetNS:/var/run/netns/82e13ff9-b670-45a7-ba0a-715cbd67b8c2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 03 20:42:34 ingress-addon-legacy-656945 crio[958]: time="2023-11-03 20:42:34.151127145Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-5ltkt from CNI network \"kindnet\" (type=ptp)"
	Nov 03 20:42:34 ingress-addon-legacy-656945 crio[958]: time="2023-11-03 20:42:34.181551013Z" level=info msg="Stopped pod sandbox: fe030154af880a46c7e519423f4b51509579f589ad4285521d4f5ac5402af0cd" id=221abc86-3008-444d-a3b5-dd0f72eb35e2 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 03 20:42:34 ingress-addon-legacy-656945 crio[958]: time="2023-11-03 20:42:34.181642395Z" level=info msg="Stopped pod sandbox (already stopped): fe030154af880a46c7e519423f4b51509579f589ad4285521d4f5ac5402af0cd" id=8d7070c9-4920-4c75-a0d2-4c5d9d8d60d3 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 03 20:42:34 ingress-addon-legacy-656945 crio[958]: time="2023-11-03 20:42:34.906270317Z" level=info msg="Stopping container: cce10b14e05b102ccf799774d8914f995b1b26b018920d4ee3f06b7c2ffbc3d9 (timeout: 2s)" id=8215c6ff-2c79-4ddc-b21c-d01affd1456e name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 03 20:42:34 ingress-addon-legacy-656945 crio[958]: time="2023-11-03 20:42:34.909494726Z" level=info msg="Stopped container cce10b14e05b102ccf799774d8914f995b1b26b018920d4ee3f06b7c2ffbc3d9: ingress-nginx/ingress-nginx-controller-7fcf777cb7-5ltkt/controller" id=8215c6ff-2c79-4ddc-b21c-d01affd1456e name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 03 20:42:34 ingress-addon-legacy-656945 crio[958]: time="2023-11-03 20:42:34.909884326Z" level=info msg="Stopping pod sandbox: fe030154af880a46c7e519423f4b51509579f589ad4285521d4f5ac5402af0cd" id=f4b2437b-5f2d-49cd-b97e-ec78a233603c name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 03 20:42:34 ingress-addon-legacy-656945 crio[958]: time="2023-11-03 20:42:34.909916869Z" level=info msg="Stopped pod sandbox (already stopped): fe030154af880a46c7e519423f4b51509579f589ad4285521d4f5ac5402af0cd" id=f4b2437b-5f2d-49cd-b97e-ec78a233603c name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
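
The CRI-O excerpt shows the ingress-nginx controller outliving its 2s stop timeout (conmon reports exit status 137, i.e. SIGKILL) before its sandbox, CNI attachment, and host ports 80/443 are torn down during the addon-disable at the end of the test. A sketch for inspecting a container's final state on the node, using the abbreviated ID from the table below:

    out/minikube-linux-amd64 -p ingress-addon-legacy-656945 ssh -- sudo crictl inspect cce10b14e05b1
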
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d633dd69e2e26       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            23 seconds ago      Running             hello-world-app           0                   8326cfe7e00ee       hello-world-app-5f5d8b66bb-gnx6h
	cd781d5c417df       docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d                    2 minutes ago       Running             nginx                     0                   ccf4e55637e82       nginx
	cce10b14e05b1       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   fe030154af880       ingress-nginx-controller-7fcf777cb7-5ltkt
	10f8c25af1977       a013daf8730dbb3908d66f67c57053f09055fddb28fde0b5808cb24c27900dc8                                                   3 minutes ago       Exited              patch                     1                   eb274edc9085d       ingress-nginx-admission-patch-spnpg
	911bc4f841d6a       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   0570ae875f043       ingress-nginx-admission-create-x6vtz
	4fa77bf2af988       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   2db95f5b325e3       coredns-66bff467f8-hlc42
	3f722ade7fe86       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   f47704943b524       storage-provisioner
	02c76e562e9cf       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   11ff1ffe4376f       kindnet-xzl4h
	00ca58f3a1899       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   c6e5d7283ad34       kube-proxy-ds9d4
	d4d92746e9a61       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   956d4e55270ec       kube-apiserver-ingress-addon-legacy-656945
	8e0e491ef111e       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   4cc71a9cffd15       etcd-ingress-addon-legacy-656945
	c154cce38f889       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   50c52d876916c       kube-controller-manager-ingress-addon-legacy-656945
	b999486cf09e0       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   c5ce8a9d15ed4       kube-scheduler-ingress-addon-legacy-656945
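
The table is crictl's view of the node and can be regenerated directly while the profile is running:

    out/minikube-linux-amd64 -p ingress-addon-legacy-656945 ssh -- sudo crictl ps -a
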
	
	* 
	* ==> coredns [4fa77bf2af9888222c93ed4bddbb5cb85530570a9e4ed6f6e021c8d69a85ec11] <==
	* [INFO] 10.244.0.5:56742 - 36096 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.006808469s
	[INFO] 10.244.0.5:60568 - 30026 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006436741s
	[INFO] 10.244.0.5:56742 - 1356 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006426742s
	[INFO] 10.244.0.5:53494 - 27457 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006503313s
	[INFO] 10.244.0.5:44631 - 2808 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006659728s
	[INFO] 10.244.0.5:46297 - 56040 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006775718s
	[INFO] 10.244.0.5:43839 - 10551 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006538759s
	[INFO] 10.244.0.5:58468 - 54619 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006846993s
	[INFO] 10.244.0.5:59109 - 60604 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006895365s
	[INFO] 10.244.0.5:44631 - 5815 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00498251s
	[INFO] 10.244.0.5:46297 - 46062 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004997252s
	[INFO] 10.244.0.5:53494 - 14040 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005112736s
	[INFO] 10.244.0.5:58468 - 31026 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004892522s
	[INFO] 10.244.0.5:59109 - 13682 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005030025s
	[INFO] 10.244.0.5:43839 - 59146 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00513695s
	[INFO] 10.244.0.5:56742 - 39019 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005245197s
	[INFO] 10.244.0.5:44631 - 4506 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000175061s
	[INFO] 10.244.0.5:58468 - 17206 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000065172s
	[INFO] 10.244.0.5:60568 - 12956 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005430366s
	[INFO] 10.244.0.5:43839 - 24473 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000043024s
	[INFO] 10.244.0.5:56742 - 33566 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000141123s
	[INFO] 10.244.0.5:53494 - 35704 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000327612s
	[INFO] 10.244.0.5:46297 - 6394 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000408578s
	[INFO] 10.244.0.5:60568 - 42395 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000152805s
	[INFO] 10.244.0.5:59109 - 20602 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00006847s
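
The NXDOMAIN fan-out above is ordinary resolv.conf search-path expansion: with ndots:5, a short name like hello-world-app is tried against every search suffix (cluster.local, then the GCE-provided domains) before the fully qualified query returns NOERROR. The pod-side configuration driving it can be checked from any running pod, e.g. the nginx pod from this test:

    kubectl --context ingress-addon-legacy-656945 exec nginx -- cat /etc/resolv.conf
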
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-656945
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-656945
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=44765b58c8440feed3c9edc110a2d06dc722956e
	                    minikube.k8s.io/name=ingress-addon-legacy-656945
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_03T20_38_59_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Nov 2023 20:38:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-656945
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Nov 2023 20:42:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Nov 2023 20:42:29 +0000   Fri, 03 Nov 2023 20:38:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Nov 2023 20:42:29 +0000   Fri, 03 Nov 2023 20:38:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Nov 2023 20:42:29 +0000   Fri, 03 Nov 2023 20:38:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 Nov 2023 20:42:29 +0000   Fri, 03 Nov 2023 20:39:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-656945
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 c2fef9537c2d4103a462dbae491a1789
	  System UUID:                493bacb7-2531-4653-b659-fc46ceee85d1
	  Boot ID:                    399e003d-4e5c-4eac-b4ee-6a616fb3f737
	  Kernel Version:             5.15.0-1046-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-gnx6h                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 coredns-66bff467f8-hlc42                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m26s
	  kube-system                 etcd-ingress-addon-legacy-656945                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 kindnet-xzl4h                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m26s
	  kube-system                 kube-apiserver-ingress-addon-legacy-656945             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-656945    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 kube-proxy-ds9d4                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 kube-scheduler-ingress-addon-legacy-656945             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m48s (x5 over 3m48s)  kubelet     Node ingress-addon-legacy-656945 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m48s (x4 over 3m48s)  kubelet     Node ingress-addon-legacy-656945 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m48s (x4 over 3m48s)  kubelet     Node ingress-addon-legacy-656945 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m41s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m41s                  kubelet     Node ingress-addon-legacy-656945 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m41s                  kubelet     Node ingress-addon-legacy-656945 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m41s                  kubelet     Node ingress-addon-legacy-656945 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m25s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m20s                  kubelet     Node ingress-addon-legacy-656945 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.004971] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.007941] FS-Cache: N-cookie d=00000000c241a6d9{9p.inode} n=0000000043748617
	[  +0.009108] FS-Cache: N-key=[8] '78a00f0200000000'
	[  +0.307015] FS-Cache: Duplicate cookie detected
	[  +0.004692] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006750] FS-Cache: O-cookie d=00000000c241a6d9{9p.inode} n=00000000a199da0f
	[  +0.007353] FS-Cache: O-key=[8] '82a00f0200000000'
	[  +0.004923] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006579] FS-Cache: N-cookie d=00000000c241a6d9{9p.inode} n=00000000c0333615
	[  +0.008721] FS-Cache: N-key=[8] '82a00f0200000000'
	[Nov 3 20:38] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 3 20:40] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1e 89 f8 b0 17 6d ea 16 18 96 fb d5 08 00
	[  +1.004199] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1e 89 f8 b0 17 6d ea 16 18 96 fb d5 08 00
	[  +2.015806] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1e 89 f8 b0 17 6d ea 16 18 96 fb d5 08 00
	[  +4.159628] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1e 89 f8 b0 17 6d ea 16 18 96 fb d5 08 00
	[  +8.191169] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1e 89 f8 b0 17 6d ea 16 18 96 fb d5 08 00
	[ +16.126457] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1e 89 f8 b0 17 6d ea 16 18 96 fb d5 08 00
	[Nov 3 20:41] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 1e 89 f8 b0 17 6d ea 16 18 96 fb d5 08 00
	
	* 
	* ==> etcd [8e0e491ef111e5bc7013c0fcab9c80bcabb28590f3b565343ec95ac0e2a75509] <==
	* raft2023/11/03 20:38:52 INFO: aec36adc501070cc became follower at term 0
	raft2023/11/03 20:38:52 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/11/03 20:38:52 INFO: aec36adc501070cc became follower at term 1
	raft2023/11/03 20:38:52 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-03 20:38:52.312948 W | auth: simple token is not cryptographically signed
	2023-11-03 20:38:52.315302 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-11-03 20:38:52.315432 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/11/03 20:38:52 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-03 20:38:52.316281 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-11-03 20:38:52.317749 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-03 20:38:52.317793 I | embed: listening for peers on 192.168.49.2:2380
	2023-11-03 20:38:52.318069 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/11/03 20:38:52 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/11/03 20:38:52 INFO: aec36adc501070cc became candidate at term 2
	raft2023/11/03 20:38:52 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/11/03 20:38:52 INFO: aec36adc501070cc became leader at term 2
	raft2023/11/03 20:38:52 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-11-03 20:38:52.808958 I | etcdserver: setting up the initial cluster version to 3.4
	2023-11-03 20:38:52.809089 I | etcdserver: published {Name:ingress-addon-legacy-656945 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-11-03 20:38:52.809155 I | embed: ready to serve client requests
	2023-11-03 20:38:52.809312 I | embed: ready to serve client requests
	2023-11-03 20:38:52.809697 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-11-03 20:38:52.810063 I | etcdserver/api: enabled capabilities for version 3.4
	2023-11-03 20:38:52.811665 I | embed: serving client requests on 192.168.49.2:2379
	2023-11-03 20:38:52.811772 I | embed: serving client requests on 127.0.0.1:2379
	
	* 
	* ==> kernel <==
	*  20:42:39 up 25 min,  0 users,  load average: 0.78, 0.67, 0.48
	Linux ingress-addon-legacy-656945 5.15.0-1046-gcp #54~20.04.1-Ubuntu SMP Wed Oct 25 08:22:15 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [02c76e562e9cf4210873e6aedc9b3be4003d413db0459f4d53288faa1d41bd85] <==
	* I1103 20:40:37.464216       1 main.go:227] handling current node
	I1103 20:40:47.467818       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1103 20:40:47.467842       1 main.go:227] handling current node
	I1103 20:40:57.479530       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1103 20:40:57.479553       1 main.go:227] handling current node
	I1103 20:41:07.483185       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1103 20:41:07.483215       1 main.go:227] handling current node
	I1103 20:41:17.494759       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1103 20:41:17.494784       1 main.go:227] handling current node
	I1103 20:41:27.498790       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1103 20:41:27.498816       1 main.go:227] handling current node
	I1103 20:41:37.507730       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1103 20:41:37.507752       1 main.go:227] handling current node
	I1103 20:41:47.511561       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1103 20:41:47.511587       1 main.go:227] handling current node
	I1103 20:41:57.525530       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1103 20:41:57.525557       1 main.go:227] handling current node
	I1103 20:42:07.529550       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1103 20:42:07.529573       1 main.go:227] handling current node
	I1103 20:42:17.533730       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1103 20:42:17.533848       1 main.go:227] handling current node
	I1103 20:42:27.537795       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1103 20:42:27.537825       1 main.go:227] handling current node
	I1103 20:42:37.549857       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1103 20:42:37.549889       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [d4d92746e9a61f197411e02c21e1274a1d251c9de73254ca511c65af3e511d55] <==
	* I1103 20:38:55.926660       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1103 20:38:55.926730       1 cache.go:39] Caches are synced for autoregister controller
	I1103 20:38:55.926745       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1103 20:38:55.926857       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1103 20:38:55.927240       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1103 20:38:56.825922       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1103 20:38:56.825951       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1103 20:38:56.833911       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1103 20:38:56.836648       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1103 20:38:56.836668       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1103 20:38:57.098946       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1103 20:38:57.127499       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1103 20:38:57.215104       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1103 20:38:57.215883       1 controller.go:609] quota admission added evaluator for: endpoints
	I1103 20:38:57.218569       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1103 20:38:58.155321       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1103 20:38:58.586325       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1103 20:38:58.741743       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1103 20:38:58.893295       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1103 20:39:13.526699       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1103 20:39:13.612081       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1103 20:39:31.624747       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1103 20:39:54.227657       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1103 20:42:31.988798       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	E1103 20:42:33.124873       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [c154cce38f889b1d040c16d401c34297e847d503eb004dac5907d90c2976a977] <==
	* I1103 20:39:13.784689       1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I1103 20:39:13.784721       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-656945", UID:"ddcb3970-32a5-49af-ad79-30458e8cb6ad", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-656945 event: Registered Node ingress-addon-legacy-656945 in Controller
	I1103 20:39:13.871690       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I1103 20:39:13.968247       1 shared_informer.go:230] Caches are synced for resource quota 
	I1103 20:39:14.016951       1 shared_informer.go:230] Caches are synced for resource quota 
	I1103 20:39:14.020617       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"91c9373f-6bfa-4421-8c87-f8a46b4e341a", APIVersion:"apps/v1", ResourceVersion:"363", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1103 20:39:14.028568       1 shared_informer.go:230] Caches are synced for certificate-csrsigning 
	I1103 20:39:14.034045       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"3bfec9ea-cc5c-4c14-a8ed-978a274e4635", APIVersion:"apps/v1", ResourceVersion:"364", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-t8cjq
	I1103 20:39:14.091342       1 shared_informer.go:230] Caches are synced for certificate-csrapproving 
	I1103 20:39:14.188576       1 shared_informer.go:230] Caches are synced for PV protection 
	I1103 20:39:14.188595       1 shared_informer.go:230] Caches are synced for attach detach 
	I1103 20:39:14.202943       1 shared_informer.go:230] Caches are synced for expand 
	I1103 20:39:14.204087       1 shared_informer.go:230] Caches are synced for persistent volume 
	I1103 20:39:14.206589       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1103 20:39:14.206617       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1103 20:39:14.209229       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1103 20:39:23.785204       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1103 20:39:31.618337       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"4192a22d-ff15-422f-be88-d67152d83b1d", APIVersion:"apps/v1", ResourceVersion:"469", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1103 20:39:31.624673       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"afec635b-804b-4a7f-8a3e-e40033965253", APIVersion:"apps/v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-5ltkt
	I1103 20:39:31.689071       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"a6bd933b-f929-42e7-8209-cb408ff7d8ba", APIVersion:"batch/v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-x6vtz
	I1103 20:39:31.698607       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"e1a8fb45-0dcf-43f6-b9b8-3cf6b35f6eda", APIVersion:"batch/v1", ResourceVersion:"481", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-spnpg
	I1103 20:39:34.968476       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"a6bd933b-f929-42e7-8209-cb408ff7d8ba", APIVersion:"batch/v1", ResourceVersion:"487", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1103 20:39:35.972996       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"e1a8fb45-0dcf-43f6-b9b8-3cf6b35f6eda", APIVersion:"batch/v1", ResourceVersion:"492", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1103 20:42:14.400797       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"6af84034-94f9-496d-90f3-b7ed34b5fb6a", APIVersion:"apps/v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1103 20:42:14.406939       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"fc06a106-806d-4a18-acfc-67be7fb33ef5", APIVersion:"apps/v1", ResourceVersion:"711", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-gnx6h
	
	* 
	* ==> kube-proxy [00ca58f3a1899a4c764716599bafa108312b13fd6057bc34925d3f2e8809c349] <==
	* W1103 20:39:14.308160       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1103 20:39:14.316360       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1103 20:39:14.316394       1 server_others.go:186] Using iptables Proxier.
	I1103 20:39:14.317483       1 server.go:583] Version: v1.18.20
	I1103 20:39:14.318494       1 config.go:315] Starting service config controller
	I1103 20:39:14.318523       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1103 20:39:14.319351       1 config.go:133] Starting endpoints config controller
	I1103 20:39:14.319380       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1103 20:39:14.418720       1 shared_informer.go:230] Caches are synced for service config 
	I1103 20:39:14.419522       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [b999486cf09e039028720f782eb153f998a6dfdf180845fe378d08ff185ae7b1] <==
	* I1103 20:38:55.901557       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1103 20:38:55.905041       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1103 20:38:55.905156       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1103 20:38:55.905960       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1103 20:38:55.906003       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1103 20:38:55.906599       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1103 20:38:55.907769       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1103 20:38:55.907803       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1103 20:38:55.907860       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1103 20:38:55.907972       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1103 20:38:55.908019       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1103 20:38:55.908066       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1103 20:38:55.908543       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1103 20:38:55.908761       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1103 20:38:55.909089       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1103 20:38:55.909156       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1103 20:38:55.909401       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1103 20:38:56.714832       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1103 20:38:56.757385       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1103 20:38:56.774896       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1103 20:38:56.787634       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1103 20:38:56.821233       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1103 20:38:56.890163       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1103 20:38:56.933756       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1103 20:38:57.405662       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Nov 03 20:41:56 ingress-addon-legacy-656945 kubelet[1859]: E1103 20:41:56.906540    1859 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 03 20:41:56 ingress-addon-legacy-656945 kubelet[1859]: E1103 20:41:56.906575    1859 pod_workers.go:191] Error syncing pod 9295dd7d-cc6f-47c5-965f-e991f0aad516 ("kube-ingress-dns-minikube_kube-system(9295dd7d-cc6f-47c5-965f-e991f0aad516)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Nov 03 20:42:07 ingress-addon-legacy-656945 kubelet[1859]: E1103 20:42:07.906381    1859 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 03 20:42:07 ingress-addon-legacy-656945 kubelet[1859]: E1103 20:42:07.906426    1859 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 03 20:42:07 ingress-addon-legacy-656945 kubelet[1859]: E1103 20:42:07.906470    1859 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 03 20:42:07 ingress-addon-legacy-656945 kubelet[1859]: E1103 20:42:07.906496    1859 pod_workers.go:191] Error syncing pod 9295dd7d-cc6f-47c5-965f-e991f0aad516 ("kube-ingress-dns-minikube_kube-system(9295dd7d-cc6f-47c5-965f-e991f0aad516)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Nov 03 20:42:14 ingress-addon-legacy-656945 kubelet[1859]: I1103 20:42:14.411276    1859 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Nov 03 20:42:14 ingress-addon-legacy-656945 kubelet[1859]: I1103 20:42:14.593999    1859 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ss9bh" (UniqueName: "kubernetes.io/secret/eed3ccb3-2b40-4cc6-8512-371e7598c811-default-token-ss9bh") pod "hello-world-app-5f5d8b66bb-gnx6h" (UID: "eed3ccb3-2b40-4cc6-8512-371e7598c811")
	Nov 03 20:42:14 ingress-addon-legacy-656945 kubelet[1859]: W1103 20:42:14.761486    1859 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/8f5315efa5bf5f76f89badf80216a2f1bee1f04489ca68d1e7178de2fc941740/crio-8326cfe7e00ee438e626e51a0443a3bf90c6eb04f365f3b2a2c9ac2ae141fba0 WatchSource:0}: Error finding container 8326cfe7e00ee438e626e51a0443a3bf90c6eb04f365f3b2a2c9ac2ae141fba0: Status 404 returned error &{%!s(*http.body=&{0xc00080a660 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x750800) %!s(func() error=0x750790)}
	Nov 03 20:42:22 ingress-addon-legacy-656945 kubelet[1859]: E1103 20:42:22.906414    1859 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 03 20:42:22 ingress-addon-legacy-656945 kubelet[1859]: E1103 20:42:22.906459    1859 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 03 20:42:22 ingress-addon-legacy-656945 kubelet[1859]: E1103 20:42:22.906523    1859 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 03 20:42:22 ingress-addon-legacy-656945 kubelet[1859]: E1103 20:42:22.906557    1859 pod_workers.go:191] Error syncing pod 9295dd7d-cc6f-47c5-965f-e991f0aad516 ("kube-ingress-dns-minikube_kube-system(9295dd7d-cc6f-47c5-965f-e991f0aad516)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Nov 03 20:42:30 ingress-addon-legacy-656945 kubelet[1859]: I1103 20:42:30.228906    1859 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-fq6dn" (UniqueName: "kubernetes.io/secret/9295dd7d-cc6f-47c5-965f-e991f0aad516-minikube-ingress-dns-token-fq6dn") pod "9295dd7d-cc6f-47c5-965f-e991f0aad516" (UID: "9295dd7d-cc6f-47c5-965f-e991f0aad516")
	Nov 03 20:42:30 ingress-addon-legacy-656945 kubelet[1859]: I1103 20:42:30.230869    1859 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9295dd7d-cc6f-47c5-965f-e991f0aad516-minikube-ingress-dns-token-fq6dn" (OuterVolumeSpecName: "minikube-ingress-dns-token-fq6dn") pod "9295dd7d-cc6f-47c5-965f-e991f0aad516" (UID: "9295dd7d-cc6f-47c5-965f-e991f0aad516"). InnerVolumeSpecName "minikube-ingress-dns-token-fq6dn". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 03 20:42:30 ingress-addon-legacy-656945 kubelet[1859]: I1103 20:42:30.329215    1859 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-fq6dn" (UniqueName: "kubernetes.io/secret/9295dd7d-cc6f-47c5-965f-e991f0aad516-minikube-ingress-dns-token-fq6dn") on node "ingress-addon-legacy-656945" DevicePath ""
	Nov 03 20:42:31 ingress-addon-legacy-656945 kubelet[1859]: E1103 20:42:31.978623    1859 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-5ltkt.179437a78050a3ee", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-5ltkt", UID:"50516847-8aef-4e1d-9bb2-9db85c6c64ae", APIVersion:"v1", ResourceVersion:"475", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-656945"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc14974adfa413dee, ext:213420628040, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc14974adfa413dee, ext:213420628040, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-5ltkt.179437a78050a3ee" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 03 20:42:31 ingress-addon-legacy-656945 kubelet[1859]: E1103 20:42:31.981841    1859 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-5ltkt.179437a78050a3ee", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-5ltkt", UID:"50516847-8aef-4e1d-9bb2-9db85c6c64ae", APIVersion:"v1", ResourceVersion:"475", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-656945"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc14974adfa413dee, ext:213420628040, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc14974adfa5bdc1f, ext:213422372473, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-5ltkt.179437a78050a3ee" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 03 20:42:34 ingress-addon-legacy-656945 kubelet[1859]: I1103 20:42:34.237865    1859 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/50516847-8aef-4e1d-9bb2-9db85c6c64ae-webhook-cert") pod "50516847-8aef-4e1d-9bb2-9db85c6c64ae" (UID: "50516847-8aef-4e1d-9bb2-9db85c6c64ae")
	Nov 03 20:42:34 ingress-addon-legacy-656945 kubelet[1859]: I1103 20:42:34.237902    1859 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-wj5z4" (UniqueName: "kubernetes.io/secret/50516847-8aef-4e1d-9bb2-9db85c6c64ae-ingress-nginx-token-wj5z4") pod "50516847-8aef-4e1d-9bb2-9db85c6c64ae" (UID: "50516847-8aef-4e1d-9bb2-9db85c6c64ae")
	Nov 03 20:42:34 ingress-addon-legacy-656945 kubelet[1859]: W1103 20:42:34.239329    1859 pod_container_deletor.go:77] Container "fe030154af880a46c7e519423f4b51509579f589ad4285521d4f5ac5402af0cd" not found in pod's containers
	Nov 03 20:42:34 ingress-addon-legacy-656945 kubelet[1859]: I1103 20:42:34.240021    1859 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50516847-8aef-4e1d-9bb2-9db85c6c64ae-ingress-nginx-token-wj5z4" (OuterVolumeSpecName: "ingress-nginx-token-wj5z4") pod "50516847-8aef-4e1d-9bb2-9db85c6c64ae" (UID: "50516847-8aef-4e1d-9bb2-9db85c6c64ae"). InnerVolumeSpecName "ingress-nginx-token-wj5z4". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 03 20:42:34 ingress-addon-legacy-656945 kubelet[1859]: I1103 20:42:34.240253    1859 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50516847-8aef-4e1d-9bb2-9db85c6c64ae-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "50516847-8aef-4e1d-9bb2-9db85c6c64ae" (UID: "50516847-8aef-4e1d-9bb2-9db85c6c64ae"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 03 20:42:34 ingress-addon-legacy-656945 kubelet[1859]: I1103 20:42:34.338136    1859 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/50516847-8aef-4e1d-9bb2-9db85c6c64ae-webhook-cert") on node "ingress-addon-legacy-656945" DevicePath ""
	Nov 03 20:42:34 ingress-addon-legacy-656945 kubelet[1859]: I1103 20:42:34.338170    1859 reconciler.go:319] Volume detached for volume "ingress-nginx-token-wj5z4" (UniqueName: "kubernetes.io/secret/50516847-8aef-4e1d-9bb2-9db85c6c64ae-ingress-nginx-token-wj5z4") on node "ingress-addon-legacy-656945" DevicePath ""
	
	* 
	* ==> storage-provisioner [3f722ade7fe86158e1a5cde6c09813ca2d38d856183f75607e329012c3a5d06e] <==
	* I1103 20:39:24.293501       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1103 20:39:24.302003       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1103 20:39:24.302066       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1103 20:39:24.308111       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1103 20:39:24.308207       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0d99e7e3-d6a2-43b0-ae1c-aebe2341228f", APIVersion:"v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-656945_a9e0c075-2f11-413d-9ebe-220373b57408 became leader
	I1103 20:39:24.308241       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-656945_a9e0c075-2f11-413d-9ebe-220373b57408!
	I1103 20:39:24.408374       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-656945_a9e0c075-2f11-413d-9ebe-220373b57408!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-656945 -n ingress-addon-legacy-656945
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-656945 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (178.61s)
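Note on the failure above: the kubelet entries carry the root cause. CRI-O rejects the image reference "cryptexlabs/minikube-ingress-dns:0.3.0" because it is a short name and no unqualified-search registries are defined in /etc/containers/registries.conf. A minimal workaround sketch follows, assuming docker.io is the registry the addon intends (the log does not say which) and using the `minikube ssh -- <cmd>` form that appears elsewhere in this report:

	# Sketch only: either fully qualify the image
	# (docker.io/cryptexlabs/minikube-ingress-dns:0.3.0), or give CRI-O a
	# search registry for resolving short names, then restart the runtime.
	minikube ssh -p ingress-addon-legacy-656945 -- sudo sh -c \
	  'printf "unqualified-search-registries = [\"docker.io\"]\n" >> /etc/containers/registries.conf && systemctl restart crio'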

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-280480 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-280480 -- exec busybox-5bc68d56bd-5rnbm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-280480 -- exec busybox-5bc68d56bd-5rnbm -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-280480 -- exec busybox-5bc68d56bd-5rnbm -- sh -c "ping -c 1 192.168.58.1": exit status 1 (177.33395ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-5rnbm): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-280480 -- exec busybox-5bc68d56bd-z5cz8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-280480 -- exec busybox-5bc68d56bd-z5cz8 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-280480 -- exec busybox-5bc68d56bd-z5cz8 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (181.150878ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-z5cz8): exit status 1
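Note on the two failures above: both pods die identically with "ping: permission denied (are you root?)". busybox ping opens a raw ICMP socket, which requires CAP_NET_RAW; without that capability the only other path is the kernel's unprivileged ICMP echo sockets, which are gated by the net.ipv4.ping_group_range sysctl (its default of "1 0" disables them). A quick check sketch, assuming the profile name doubles as the kubectl context, which is minikube's usual behavior:

	# Sketch only: read the sysctl from inside one of the failing pods.
	# "1 0" (the kernel default) means unprivileged ICMP echo is off;
	# a range such as "0 2147483647" would allow it for all groups.
	kubectl --context multinode-280480 exec busybox-5bc68d56bd-5rnbm -- \
	  cat /proc/sys/net/ipv4/ping_group_range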
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-280480
helpers_test.go:235: (dbg) docker inspect multinode-280480:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6561f5214f3b17505a43ff57c40d46ca1f1dcdf0e2d6bd8538c6a73879314ab8",
	        "Created": "2023-11-03T20:47:39.32462085Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 99080,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-03T20:47:39.630832238Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:efd86a3765897881549ab05896b96b2b4ff17749f0a64fb6c355478ceebc8b47",
	        "ResolvConfPath": "/var/lib/docker/containers/6561f5214f3b17505a43ff57c40d46ca1f1dcdf0e2d6bd8538c6a73879314ab8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6561f5214f3b17505a43ff57c40d46ca1f1dcdf0e2d6bd8538c6a73879314ab8/hostname",
	        "HostsPath": "/var/lib/docker/containers/6561f5214f3b17505a43ff57c40d46ca1f1dcdf0e2d6bd8538c6a73879314ab8/hosts",
	        "LogPath": "/var/lib/docker/containers/6561f5214f3b17505a43ff57c40d46ca1f1dcdf0e2d6bd8538c6a73879314ab8/6561f5214f3b17505a43ff57c40d46ca1f1dcdf0e2d6bd8538c6a73879314ab8-json.log",
	        "Name": "/multinode-280480",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-280480:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-280480",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6ef76996f1e83696e83a1cf28d2af75eaa2577df4e06ad2c07dba08797d691ff-init/diff:/var/lib/docker/overlay2/10f966e66ad11ebf0563dbe6bde99d657b975224ac619c4daa8db5a19a2b3420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6ef76996f1e83696e83a1cf28d2af75eaa2577df4e06ad2c07dba08797d691ff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6ef76996f1e83696e83a1cf28d2af75eaa2577df4e06ad2c07dba08797d691ff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6ef76996f1e83696e83a1cf28d2af75eaa2577df4e06ad2c07dba08797d691ff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-280480",
	                "Source": "/var/lib/docker/volumes/multinode-280480/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-280480",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-280480",
	                "name.minikube.sigs.k8s.io": "multinode-280480",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "af1cc20a7c5e9c99a317f4caef59541b8889c259fb3a719d0da9e9357562824b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32849"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/af1cc20a7c5e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-280480": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6561f5214f3b",
	                        "multinode-280480"
	                    ],
	                    "NetworkID": "2756f1a3ad86ec7c35e05f27ab489a7a4a814cd8c234159e2c42058af0a6ede0",
	                    "EndpointID": "5b9c5574cc77b8b2ec826d360a32589ed27c854571cf0b050a5f6bd7dd1d58ad",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-280480 -n multinode-280480
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-280480 logs -n 25: (1.143705518s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -p mount-start-2-489412                           | mount-start-2-489412 | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:47 UTC | 03 Nov 23 20:47 UTC |
	|         | --memory=2048 --mount                             |                      |         |                |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |                |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |                |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |                |                     |                     |
	|         | --driver=docker                                   |                      |         |                |                     |                     |
	|         | --container-runtime=crio                          |                      |         |                |                     |                     |
	| ssh     | mount-start-2-489412 ssh -- ls                    | mount-start-2-489412 | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:47 UTC | 03 Nov 23 20:47 UTC |
	|         | /minikube-host                                    |                      |         |                |                     |                     |
	| delete  | -p mount-start-1-472100                           | mount-start-1-472100 | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:47 UTC | 03 Nov 23 20:47 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |                |                     |                     |
	| ssh     | mount-start-2-489412 ssh -- ls                    | mount-start-2-489412 | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:47 UTC | 03 Nov 23 20:47 UTC |
	|         | /minikube-host                                    |                      |         |                |                     |                     |
	| stop    | -p mount-start-2-489412                           | mount-start-2-489412 | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:47 UTC | 03 Nov 23 20:47 UTC |
	| start   | -p mount-start-2-489412                           | mount-start-2-489412 | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:47 UTC | 03 Nov 23 20:47 UTC |
	| ssh     | mount-start-2-489412 ssh -- ls                    | mount-start-2-489412 | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:47 UTC | 03 Nov 23 20:47 UTC |
	|         | /minikube-host                                    |                      |         |                |                     |                     |
	| delete  | -p mount-start-2-489412                           | mount-start-2-489412 | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:47 UTC | 03 Nov 23 20:47 UTC |
	| delete  | -p mount-start-1-472100                           | mount-start-1-472100 | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:47 UTC | 03 Nov 23 20:47 UTC |
	| start   | -p multinode-280480                               | multinode-280480     | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:47 UTC | 03 Nov 23 20:48 UTC |
	|         | --wait=true --memory=2200                         |                      |         |                |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |                |                     |                     |
	|         | --alsologtostderr                                 |                      |         |                |                     |                     |
	|         | --driver=docker                                   |                      |         |                |                     |                     |
	|         | --container-runtime=crio                          |                      |         |                |                     |                     |
	| kubectl | -p multinode-280480 -- apply -f                   | multinode-280480     | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:48 UTC | 03 Nov 23 20:48 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |                |                     |                     |
	| kubectl | -p multinode-280480 -- rollout                    | multinode-280480     | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:48 UTC | 03 Nov 23 20:48 UTC |
	|         | status deployment/busybox                         |                      |         |                |                     |                     |
	| kubectl | -p multinode-280480 -- get pods -o                | multinode-280480     | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:48 UTC | 03 Nov 23 20:48 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |                |                     |                     |
	| kubectl | -p multinode-280480 -- get pods -o                | multinode-280480     | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:48 UTC | 03 Nov 23 20:48 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |                |                     |                     |
	| kubectl | -p multinode-280480 -- exec                       | multinode-280480     | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:48 UTC | 03 Nov 23 20:48 UTC |
	|         | busybox-5bc68d56bd-5rnbm --                       |                      |         |                |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |                |                     |                     |
	| kubectl | -p multinode-280480 -- exec                       | multinode-280480     | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:48 UTC | 03 Nov 23 20:48 UTC |
	|         | busybox-5bc68d56bd-z5cz8 --                       |                      |         |                |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |                |                     |                     |
	| kubectl | -p multinode-280480 -- exec                       | multinode-280480     | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:48 UTC | 03 Nov 23 20:49 UTC |
	|         | busybox-5bc68d56bd-5rnbm --                       |                      |         |                |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |                |                     |                     |
	| kubectl | -p multinode-280480 -- exec                       | multinode-280480     | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:49 UTC | 03 Nov 23 20:49 UTC |
	|         | busybox-5bc68d56bd-z5cz8 --                       |                      |         |                |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |                |                     |                     |
	| kubectl | -p multinode-280480 -- exec                       | multinode-280480     | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:49 UTC | 03 Nov 23 20:49 UTC |
	|         | busybox-5bc68d56bd-5rnbm -- nslookup              |                      |         |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |                |                     |                     |
	| kubectl | -p multinode-280480 -- exec                       | multinode-280480     | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:49 UTC | 03 Nov 23 20:49 UTC |
	|         | busybox-5bc68d56bd-z5cz8 -- nslookup              |                      |         |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |                |                     |                     |
	| kubectl | -p multinode-280480 -- get pods -o                | multinode-280480     | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:49 UTC | 03 Nov 23 20:49 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |                |                     |                     |
	| kubectl | -p multinode-280480 -- exec                       | multinode-280480     | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:49 UTC | 03 Nov 23 20:49 UTC |
	|         | busybox-5bc68d56bd-5rnbm                          |                      |         |                |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |                |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |                |                     |                     |
	| kubectl | -p multinode-280480 -- exec                       | multinode-280480     | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:49 UTC |                     |
	|         | busybox-5bc68d56bd-5rnbm -- sh                    |                      |         |                |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |                |                     |                     |
	| kubectl | -p multinode-280480 -- exec                       | multinode-280480     | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:49 UTC | 03 Nov 23 20:49 UTC |
	|         | busybox-5bc68d56bd-z5cz8                          |                      |         |                |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |                |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |                |                     |                     |
	| kubectl | -p multinode-280480 -- exec                       | multinode-280480     | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:49 UTC |                     |
	|         | busybox-5bc68d56bd-z5cz8 -- sh                    |                      |         |                |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |                |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
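	Note: the two audit rows above with an empty End Time are the failing host pings that this test reports. Reassembled from the wrapped Args column, each invocation is roughly equivalent to the following (a reconstruction from the table, not a verbatim transcript):
	
	  out/minikube-linux-amd64 kubectl -p multinode-280480 -- exec busybox-5bc68d56bd-5rnbm -- sh -c "ping -c 1 192.168.58.1"
	  out/minikube-linux-amd64 kubectl -p multinode-280480 -- exec busybox-5bc68d56bd-z5cz8 -- sh -c "ping -c 1 192.168.58.1"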
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/03 20:47:33
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
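	(For example, the prefix of the first line below, "I1103 20:47:33.042417   98430 out.go:296]", reads under that format as: severity I for Info, date 11/03, time 20:47:33.042417, thread id 98430, emitted from out.go line 296.)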
	I1103 20:47:33.042417   98430 out.go:296] Setting OutFile to fd 1 ...
	I1103 20:47:33.042689   98430 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1103 20:47:33.042698   98430 out.go:309] Setting ErrFile to fd 2...
	I1103 20:47:33.042702   98430 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1103 20:47:33.042869   98430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17545-5130/.minikube/bin
	I1103 20:47:33.043419   98430 out.go:303] Setting JSON to false
	I1103 20:47:33.044778   98430 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1803,"bootTime":1699042650,"procs":801,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1103 20:47:33.044837   98430 start.go:138] virtualization: kvm guest
	I1103 20:47:33.047028   98430 out.go:177] * [multinode-280480] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1103 20:47:33.048450   98430 out.go:177]   - MINIKUBE_LOCATION=17545
	I1103 20:47:33.049933   98430 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1103 20:47:33.048505   98430 notify.go:220] Checking for updates...
	I1103 20:47:33.051480   98430 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17545-5130/kubeconfig
	I1103 20:47:33.052924   98430 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17545-5130/.minikube
	I1103 20:47:33.054359   98430 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1103 20:47:33.055666   98430 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1103 20:47:33.057145   98430 driver.go:378] Setting default libvirt URI to qemu:///system
	I1103 20:47:33.078109   98430 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1103 20:47:33.078176   98430 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1103 20:47:33.127806   98430 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:36 SystemTime:2023-11-03 20:47:33.119346228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1103 20:47:33.127908   98430 docker.go:295] overlay module found
	I1103 20:47:33.129667   98430 out.go:177] * Using the docker driver based on user configuration
	I1103 20:47:33.131086   98430 start.go:298] selected driver: docker
	I1103 20:47:33.131103   98430 start.go:902] validating driver "docker" against <nil>
	I1103 20:47:33.131113   98430 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1103 20:47:33.131799   98430 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1103 20:47:33.183261   98430 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:36 SystemTime:2023-11-03 20:47:33.175151512 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1103 20:47:33.183457   98430 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1103 20:47:33.183640   98430 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1103 20:47:33.185288   98430 out.go:177] * Using Docker driver with root privileges
	I1103 20:47:33.186652   98430 cni.go:84] Creating CNI manager for ""
	I1103 20:47:33.186666   98430 cni.go:136] 0 nodes found, recommending kindnet
	I1103 20:47:33.186675   98430 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1103 20:47:33.186685   98430 start_flags.go:323] config:
	{Name:multinode-280480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-280480 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1103 20:47:33.188435   98430 out.go:177] * Starting control plane node multinode-280480 in cluster multinode-280480
	I1103 20:47:33.190056   98430 cache.go:121] Beginning downloading kic base image for docker with crio
	I1103 20:47:33.191487   98430 out.go:177] * Pulling base image ...
	I1103 20:47:33.192818   98430 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1103 20:47:33.192851   98430 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17545-5130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1103 20:47:33.192858   98430 cache.go:56] Caching tarball of preloaded images
	I1103 20:47:33.192846   98430 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 in local docker daemon
	I1103 20:47:33.192952   98430 preload.go:174] Found /home/jenkins/minikube-integration/17545-5130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1103 20:47:33.192966   98430 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1103 20:47:33.193315   98430 profile.go:148] Saving config to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/config.json ...
	I1103 20:47:33.193334   98430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/config.json: {Name:mk1ecad10c315ba88a6087b5811f96b034e5d9fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:47:33.208191   98430 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 in local docker daemon, skipping pull
	I1103 20:47:33.208211   98430 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 exists in daemon, skipping load
	I1103 20:47:33.208228   98430 cache.go:194] Successfully downloaded all kic artifacts
	I1103 20:47:33.208261   98430 start.go:365] acquiring machines lock for multinode-280480: {Name:mk51231fd323354eafc6fe2a9277b0b92aa7378b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1103 20:47:33.208350   98430 start.go:369] acquired machines lock for "multinode-280480" in 70.523µs
	I1103 20:47:33.208383   98430 start.go:93] Provisioning new machine with config: &{Name:multinode-280480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-280480 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1103 20:47:33.208497   98430 start.go:125] createHost starting for "" (driver="docker")
	I1103 20:47:33.210341   98430 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1103 20:47:33.210544   98430 start.go:159] libmachine.API.Create for "multinode-280480" (driver="docker")
	I1103 20:47:33.210571   98430 client.go:168] LocalClient.Create starting
	I1103 20:47:33.210629   98430 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem
	I1103 20:47:33.210661   98430 main.go:141] libmachine: Decoding PEM data...
	I1103 20:47:33.210681   98430 main.go:141] libmachine: Parsing certificate...
	I1103 20:47:33.210726   98430 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17545-5130/.minikube/certs/cert.pem
	I1103 20:47:33.210744   98430 main.go:141] libmachine: Decoding PEM data...
	I1103 20:47:33.210752   98430 main.go:141] libmachine: Parsing certificate...
	I1103 20:47:33.211097   98430 cli_runner.go:164] Run: docker network inspect multinode-280480 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1103 20:47:33.225985   98430 cli_runner.go:211] docker network inspect multinode-280480 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1103 20:47:33.226047   98430 network_create.go:281] running [docker network inspect multinode-280480] to gather additional debugging logs...
	I1103 20:47:33.226076   98430 cli_runner.go:164] Run: docker network inspect multinode-280480
	W1103 20:47:33.241045   98430 cli_runner.go:211] docker network inspect multinode-280480 returned with exit code 1
	I1103 20:47:33.241072   98430 network_create.go:284] error running [docker network inspect multinode-280480]: docker network inspect multinode-280480: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-280480 not found
	I1103 20:47:33.241086   98430 network_create.go:286] output of [docker network inspect multinode-280480]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-280480 not found
	
	** /stderr **
	I1103 20:47:33.241168   98430 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1103 20:47:33.256778   98430 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ea83f8c62ae4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:16:99:01:8a} reservation:<nil>}
	I1103 20:47:33.257212   98430 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00279bdc0}
	I1103 20:47:33.257240   98430 network_create.go:124] attempt to create docker network multinode-280480 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1103 20:47:33.257279   98430 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-280480 multinode-280480
	I1103 20:47:33.308814   98430 network_create.go:108] docker network multinode-280480 192.168.58.0/24 created
	I1103 20:47:33.308845   98430 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-280480" container
	I1103 20:47:33.308909   98430 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1103 20:47:33.324159   98430 cli_runner.go:164] Run: docker volume create multinode-280480 --label name.minikube.sigs.k8s.io=multinode-280480 --label created_by.minikube.sigs.k8s.io=true
	I1103 20:47:33.341552   98430 oci.go:103] Successfully created a docker volume multinode-280480
	I1103 20:47:33.341648   98430 cli_runner.go:164] Run: docker run --rm --name multinode-280480-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-280480 --entrypoint /usr/bin/test -v multinode-280480:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 -d /var/lib
	I1103 20:47:33.849864   98430 oci.go:107] Successfully prepared a docker volume multinode-280480
	I1103 20:47:33.849919   98430 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1103 20:47:33.849938   98430 kic.go:194] Starting extracting preloaded images to volume ...
	I1103 20:47:33.849992   98430 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17545-5130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-280480:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 -I lz4 -xf /preloaded.tar -C /extractDir
	I1103 20:47:39.255246   98430 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17545-5130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-280480:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 -I lz4 -xf /preloaded.tar -C /extractDir: (5.405206696s)
	I1103 20:47:39.255277   98430 kic.go:203] duration metric: took 5.405337 seconds to extract preloaded images to volume
	W1103 20:47:39.255397   98430 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1103 20:47:39.255491   98430 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1103 20:47:39.310251   98430 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-280480 --name multinode-280480 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-280480 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-280480 --network multinode-280480 --ip 192.168.58.2 --volume multinode-280480:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89
	I1103 20:47:39.638763   98430 cli_runner.go:164] Run: docker container inspect multinode-280480 --format={{.State.Running}}
	I1103 20:47:39.656332   98430 cli_runner.go:164] Run: docker container inspect multinode-280480 --format={{.State.Status}}
	I1103 20:47:39.672942   98430 cli_runner.go:164] Run: docker exec multinode-280480 stat /var/lib/dpkg/alternatives/iptables
	I1103 20:47:39.734509   98430 oci.go:144] the created container "multinode-280480" has a running status.
	I1103 20:47:39.734553   98430 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17545-5130/.minikube/machines/multinode-280480/id_rsa...
	I1103 20:47:39.982919   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/machines/multinode-280480/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1103 20:47:39.982959   98430 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17545-5130/.minikube/machines/multinode-280480/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1103 20:47:40.004001   98430 cli_runner.go:164] Run: docker container inspect multinode-280480 --format={{.State.Status}}
	I1103 20:47:40.026642   98430 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1103 20:47:40.026662   98430 kic_runner.go:114] Args: [docker exec --privileged multinode-280480 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1103 20:47:40.099675   98430 cli_runner.go:164] Run: docker container inspect multinode-280480 --format={{.State.Status}}
	I1103 20:47:40.121817   98430 machine.go:88] provisioning docker machine ...
	I1103 20:47:40.121856   98430 ubuntu.go:169] provisioning hostname "multinode-280480"
	I1103 20:47:40.121940   98430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-280480
	I1103 20:47:40.141668   98430 main.go:141] libmachine: Using SSH client type: native
	I1103 20:47:40.142028   98430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32849 <nil> <nil>}
	I1103 20:47:40.142045   98430 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-280480 && echo "multinode-280480" | sudo tee /etc/hostname
	I1103 20:47:40.310030   98430 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-280480
	
	I1103 20:47:40.310167   98430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-280480
	I1103 20:47:40.327619   98430 main.go:141] libmachine: Using SSH client type: native
	I1103 20:47:40.327992   98430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32849 <nil> <nil>}
	I1103 20:47:40.328014   98430 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-280480' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-280480/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-280480' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1103 20:47:40.451971   98430 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1103 20:47:40.452001   98430 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17545-5130/.minikube CaCertPath:/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17545-5130/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17545-5130/.minikube}
	I1103 20:47:40.452029   98430 ubuntu.go:177] setting up certificates
	I1103 20:47:40.452040   98430 provision.go:83] configureAuth start
	I1103 20:47:40.452094   98430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-280480
	I1103 20:47:40.467672   98430 provision.go:138] copyHostCerts
	I1103 20:47:40.467710   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17545-5130/.minikube/ca.pem
	I1103 20:47:40.467743   98430 exec_runner.go:144] found /home/jenkins/minikube-integration/17545-5130/.minikube/ca.pem, removing ...
	I1103 20:47:40.467752   98430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.pem
	I1103 20:47:40.467819   98430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17545-5130/.minikube/ca.pem (1082 bytes)
	I1103 20:47:40.467914   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17545-5130/.minikube/cert.pem
	I1103 20:47:40.467941   98430 exec_runner.go:144] found /home/jenkins/minikube-integration/17545-5130/.minikube/cert.pem, removing ...
	I1103 20:47:40.467950   98430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17545-5130/.minikube/cert.pem
	I1103 20:47:40.467982   98430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17545-5130/.minikube/cert.pem (1123 bytes)
	I1103 20:47:40.468040   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17545-5130/.minikube/key.pem
	I1103 20:47:40.468061   98430 exec_runner.go:144] found /home/jenkins/minikube-integration/17545-5130/.minikube/key.pem, removing ...
	I1103 20:47:40.468066   98430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17545-5130/.minikube/key.pem
	I1103 20:47:40.468099   98430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17545-5130/.minikube/key.pem (1679 bytes)
	I1103 20:47:40.468154   98430 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17545-5130/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca-key.pem org=jenkins.multinode-280480 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-280480]
	I1103 20:47:40.765242   98430 provision.go:172] copyRemoteCerts
	I1103 20:47:40.765300   98430 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1103 20:47:40.765330   98430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-280480
	I1103 20:47:40.781457   98430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/multinode-280480/id_rsa Username:docker}
	I1103 20:47:40.868080   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1103 20:47:40.868147   98430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1103 20:47:40.888477   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1103 20:47:40.888542   98430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1103 20:47:40.908674   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1103 20:47:40.908733   98430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1103 20:47:40.928572   98430 provision.go:86] duration metric: configureAuth took 476.516937ms
	I1103 20:47:40.928596   98430 ubuntu.go:193] setting minikube options for container-runtime
	I1103 20:47:40.928755   98430 config.go:182] Loaded profile config "multinode-280480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1103 20:47:40.928840   98430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-280480
	I1103 20:47:40.944162   98430 main.go:141] libmachine: Using SSH client type: native
	I1103 20:47:40.944530   98430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32849 <nil> <nil>}
	I1103 20:47:40.944555   98430 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1103 20:47:41.141867   98430 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1103 20:47:41.141895   98430 machine.go:91] provisioned docker machine in 1.020054383s
	I1103 20:47:41.141904   98430 client.go:171] LocalClient.Create took 7.931327923s
	I1103 20:47:41.141920   98430 start.go:167] duration metric: libmachine.API.Create for "multinode-280480" took 7.931376493s
	I1103 20:47:41.141928   98430 start.go:300] post-start starting for "multinode-280480" (driver="docker")
	I1103 20:47:41.141937   98430 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1103 20:47:41.141985   98430 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1103 20:47:41.142034   98430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-280480
	I1103 20:47:41.157527   98430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/multinode-280480/id_rsa Username:docker}
	I1103 20:47:41.244806   98430 ssh_runner.go:195] Run: cat /etc/os-release
	I1103 20:47:41.247619   98430 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1103 20:47:41.247643   98430 command_runner.go:130] > NAME="Ubuntu"
	I1103 20:47:41.247652   98430 command_runner.go:130] > VERSION_ID="22.04"
	I1103 20:47:41.247660   98430 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1103 20:47:41.247666   98430 command_runner.go:130] > VERSION_CODENAME=jammy
	I1103 20:47:41.247670   98430 command_runner.go:130] > ID=ubuntu
	I1103 20:47:41.247685   98430 command_runner.go:130] > ID_LIKE=debian
	I1103 20:47:41.247698   98430 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1103 20:47:41.247705   98430 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1103 20:47:41.247716   98430 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1103 20:47:41.247728   98430 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1103 20:47:41.247736   98430 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1103 20:47:41.247787   98430 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1103 20:47:41.247822   98430 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1103 20:47:41.247836   98430 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1103 20:47:41.247842   98430 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1103 20:47:41.247852   98430 filesync.go:126] Scanning /home/jenkins/minikube-integration/17545-5130/.minikube/addons for local assets ...
	I1103 20:47:41.247895   98430 filesync.go:126] Scanning /home/jenkins/minikube-integration/17545-5130/.minikube/files for local assets ...
	I1103 20:47:41.247968   98430 filesync.go:149] local asset: /home/jenkins/minikube-integration/17545-5130/.minikube/files/etc/ssl/certs/118872.pem -> 118872.pem in /etc/ssl/certs
	I1103 20:47:41.247978   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/files/etc/ssl/certs/118872.pem -> /etc/ssl/certs/118872.pem
	I1103 20:47:41.248055   98430 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1103 20:47:41.255222   98430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/files/etc/ssl/certs/118872.pem --> /etc/ssl/certs/118872.pem (1708 bytes)
	I1103 20:47:41.275014   98430 start.go:303] post-start completed in 133.076716ms
	I1103 20:47:41.275339   98430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-280480
	I1103 20:47:41.290983   98430 profile.go:148] Saving config to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/config.json ...
	I1103 20:47:41.291195   98430 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1103 20:47:41.291232   98430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-280480
	I1103 20:47:41.308442   98430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/multinode-280480/id_rsa Username:docker}
	I1103 20:47:41.396913   98430 command_runner.go:130] > 20%!
	(MISSING)I1103 20:47:41.397107   98430 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1103 20:47:41.400753   98430 command_runner.go:130] > 235G
	I1103 20:47:41.400911   98430 start.go:128] duration metric: createHost completed in 8.19240214s
	I1103 20:47:41.400931   98430 start.go:83] releasing machines lock for "multinode-280480", held for 8.192566088s
	I1103 20:47:41.400981   98430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-280480
	I1103 20:47:41.416815   98430 ssh_runner.go:195] Run: cat /version.json
	I1103 20:47:41.416859   98430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-280480
	I1103 20:47:41.416930   98430 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1103 20:47:41.416986   98430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-280480
	I1103 20:47:41.434358   98430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/multinode-280480/id_rsa Username:docker}
	I1103 20:47:41.434519   98430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/multinode-280480/id_rsa Username:docker}
	I1103 20:47:41.603004   98430 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1103 20:47:41.603063   98430 command_runner.go:130] > {"iso_version": "v1.32.0-1698773592-17486", "kicbase_version": "v0.0.41-1698881667-17516", "minikube_version": "v1.32.0-beta.0", "commit": "0a350ba0616fdc433f0bbebfe065f409f07951cc"}
	I1103 20:47:41.603165   98430 ssh_runner.go:195] Run: systemctl --version
	I1103 20:47:41.607110   98430 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I1103 20:47:41.607151   98430 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1103 20:47:41.607194   98430 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1103 20:47:41.741425   98430 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1103 20:47:41.745439   98430 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1103 20:47:41.745465   98430 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1103 20:47:41.745489   98430 command_runner.go:130] > Device: 33h/51d	Inode: 540722      Links: 1
	I1103 20:47:41.745504   98430 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1103 20:47:41.745518   98430 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1103 20:47:41.745525   98430 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1103 20:47:41.745532   98430 command_runner.go:130] > Change: 2023-11-03 20:29:19.315787835 +0000
	I1103 20:47:41.745544   98430 command_runner.go:130] >  Birth: 2023-11-03 20:29:19.315787835 +0000
	I1103 20:47:41.745607   98430 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1103 20:47:41.762502   98430 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1103 20:47:41.762583   98430 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1103 20:47:41.787231   98430 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1103 20:47:41.787266   98430 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1103 20:47:41.787276   98430 start.go:472] detecting cgroup driver to use...
	I1103 20:47:41.787308   98430 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1103 20:47:41.787356   98430 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1103 20:47:41.800473   98430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1103 20:47:41.809485   98430 docker.go:203] disabling cri-docker service (if available) ...
	I1103 20:47:41.809527   98430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1103 20:47:41.820943   98430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1103 20:47:41.832943   98430 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1103 20:47:41.904398   98430 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1103 20:47:41.984046   98430 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1103 20:47:41.984083   98430 docker.go:219] disabling docker service ...
	I1103 20:47:41.984126   98430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1103 20:47:42.000726   98430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1103 20:47:42.010851   98430 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1103 20:47:42.083465   98430 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1103 20:47:42.083531   98430 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1103 20:47:42.163680   98430 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1103 20:47:42.163737   98430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1103 20:47:42.173253   98430 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1103 20:47:42.186073   98430 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1103 20:47:42.186885   98430 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1103 20:47:42.186926   98430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1103 20:47:42.195029   98430 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1103 20:47:42.195082   98430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1103 20:47:42.203026   98430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1103 20:47:42.211122   98430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
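	The three sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with settings equivalent to this sketch (the [crio.image]/[crio.runtime] section placement is assumed from the stock CRI-O TOML layout, not captured from the node):
	
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.9"
	
	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"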
	I1103 20:47:42.219117   98430 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1103 20:47:42.226817   98430 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1103 20:47:42.233256   98430 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1103 20:47:42.233902   98430 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1103 20:47:42.240800   98430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1103 20:47:42.313990   98430 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1103 20:47:42.417000   98430 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1103 20:47:42.417054   98430 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1103 20:47:42.420565   98430 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1103 20:47:42.420593   98430 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1103 20:47:42.420603   98430 command_runner.go:130] > Device: 40h/64d	Inode: 190         Links: 1
	I1103 20:47:42.420610   98430 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1103 20:47:42.420615   98430 command_runner.go:130] > Access: 2023-11-03 20:47:42.401606602 +0000
	I1103 20:47:42.420621   98430 command_runner.go:130] > Modify: 2023-11-03 20:47:42.401606602 +0000
	I1103 20:47:42.420626   98430 command_runner.go:130] > Change: 2023-11-03 20:47:42.401606602 +0000
	I1103 20:47:42.420630   98430 command_runner.go:130] >  Birth: -
	I1103 20:47:42.420645   98430 start.go:540] Will wait 60s for crictl version
	I1103 20:47:42.420677   98430 ssh_runner.go:195] Run: which crictl
	I1103 20:47:42.424483   98430 command_runner.go:130] > /usr/bin/crictl
	I1103 20:47:42.424561   98430 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1103 20:47:42.453951   98430 command_runner.go:130] > Version:  0.1.0
	I1103 20:47:42.453977   98430 command_runner.go:130] > RuntimeName:  cri-o
	I1103 20:47:42.453989   98430 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1103 20:47:42.454004   98430 command_runner.go:130] > RuntimeApiVersion:  v1
	I1103 20:47:42.455680   98430 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1103 20:47:42.455754   98430 ssh_runner.go:195] Run: crio --version
	I1103 20:47:42.486987   98430 command_runner.go:130] > crio version 1.24.6
	I1103 20:47:42.487013   98430 command_runner.go:130] > Version:          1.24.6
	I1103 20:47:42.487025   98430 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1103 20:47:42.487032   98430 command_runner.go:130] > GitTreeState:     clean
	I1103 20:47:42.487041   98430 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1103 20:47:42.487050   98430 command_runner.go:130] > GoVersion:        go1.18.2
	I1103 20:47:42.487061   98430 command_runner.go:130] > Compiler:         gc
	I1103 20:47:42.487068   98430 command_runner.go:130] > Platform:         linux/amd64
	I1103 20:47:42.487073   98430 command_runner.go:130] > Linkmode:         dynamic
	I1103 20:47:42.487085   98430 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1103 20:47:42.487090   98430 command_runner.go:130] > SeccompEnabled:   true
	I1103 20:47:42.487094   98430 command_runner.go:130] > AppArmorEnabled:  false
	I1103 20:47:42.487152   98430 ssh_runner.go:195] Run: crio --version
	I1103 20:47:42.517149   98430 command_runner.go:130] > crio version 1.24.6
	I1103 20:47:42.517176   98430 command_runner.go:130] > Version:          1.24.6
	I1103 20:47:42.517187   98430 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1103 20:47:42.517195   98430 command_runner.go:130] > GitTreeState:     clean
	I1103 20:47:42.517204   98430 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1103 20:47:42.517211   98430 command_runner.go:130] > GoVersion:        go1.18.2
	I1103 20:47:42.517218   98430 command_runner.go:130] > Compiler:         gc
	I1103 20:47:42.517224   98430 command_runner.go:130] > Platform:         linux/amd64
	I1103 20:47:42.517241   98430 command_runner.go:130] > Linkmode:         dynamic
	I1103 20:47:42.517253   98430 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1103 20:47:42.517261   98430 command_runner.go:130] > SeccompEnabled:   true
	I1103 20:47:42.517265   98430 command_runner.go:130] > AppArmorEnabled:  false
	I1103 20:47:42.520415   98430 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1103 20:47:42.521992   98430 cli_runner.go:164] Run: docker network inspect multinode-280480 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1103 20:47:42.537459   98430 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1103 20:47:42.540744   98430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1103 20:47:42.550279   98430 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1103 20:47:42.550329   98430 ssh_runner.go:195] Run: sudo crictl images --output json
	I1103 20:47:42.599634   98430 command_runner.go:130] > {
	I1103 20:47:42.599660   98430 command_runner.go:130] >   "images": [
	I1103 20:47:42.599668   98430 command_runner.go:130] >     {
	I1103 20:47:42.599681   98430 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1103 20:47:42.599690   98430 command_runner.go:130] >       "repoTags": [
	I1103 20:47:42.599700   98430 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1103 20:47:42.599707   98430 command_runner.go:130] >       ],
	I1103 20:47:42.599711   98430 command_runner.go:130] >       "repoDigests": [
	I1103 20:47:42.599720   98430 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1103 20:47:42.599730   98430 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1103 20:47:42.599737   98430 command_runner.go:130] >       ],
	I1103 20:47:42.599743   98430 command_runner.go:130] >       "size": "65258016",
	I1103 20:47:42.599749   98430 command_runner.go:130] >       "uid": null,
	I1103 20:47:42.599759   98430 command_runner.go:130] >       "username": "",
	I1103 20:47:42.599770   98430 command_runner.go:130] >       "spec": null,
	I1103 20:47:42.599780   98430 command_runner.go:130] >       "pinned": false
	I1103 20:47:42.599787   98430 command_runner.go:130] >     },
	I1103 20:47:42.599790   98430 command_runner.go:130] >     {
	I1103 20:47:42.599796   98430 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1103 20:47:42.599807   98430 command_runner.go:130] >       "repoTags": [
	I1103 20:47:42.599812   98430 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1103 20:47:42.599819   98430 command_runner.go:130] >       ],
	I1103 20:47:42.599826   98430 command_runner.go:130] >       "repoDigests": [
	I1103 20:47:42.599836   98430 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1103 20:47:42.599844   98430 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1103 20:47:42.599853   98430 command_runner.go:130] >       ],
	I1103 20:47:42.599860   98430 command_runner.go:130] >       "size": "31470524",
	I1103 20:47:42.599866   98430 command_runner.go:130] >       "uid": null,
	I1103 20:47:42.599874   98430 command_runner.go:130] >       "username": "",
	I1103 20:47:42.599881   98430 command_runner.go:130] >       "spec": null,
	I1103 20:47:42.599885   98430 command_runner.go:130] >       "pinned": false
	I1103 20:47:42.599895   98430 command_runner.go:130] >     },
	I1103 20:47:42.599899   98430 command_runner.go:130] >     {
	I1103 20:47:42.599908   98430 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1103 20:47:42.599912   98430 command_runner.go:130] >       "repoTags": [
	I1103 20:47:42.599920   98430 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1103 20:47:42.599927   98430 command_runner.go:130] >       ],
	I1103 20:47:42.599932   98430 command_runner.go:130] >       "repoDigests": [
	I1103 20:47:42.599941   98430 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1103 20:47:42.599951   98430 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1103 20:47:42.599957   98430 command_runner.go:130] >       ],
	I1103 20:47:42.599961   98430 command_runner.go:130] >       "size": "53621675",
	I1103 20:47:42.599966   98430 command_runner.go:130] >       "uid": null,
	I1103 20:47:42.599971   98430 command_runner.go:130] >       "username": "",
	I1103 20:47:42.599977   98430 command_runner.go:130] >       "spec": null,
	I1103 20:47:42.599981   98430 command_runner.go:130] >       "pinned": false
	I1103 20:47:42.599985   98430 command_runner.go:130] >     },
	I1103 20:47:42.599988   98430 command_runner.go:130] >     {
	I1103 20:47:42.599994   98430 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1103 20:47:42.599999   98430 command_runner.go:130] >       "repoTags": [
	I1103 20:47:42.600007   98430 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1103 20:47:42.600014   98430 command_runner.go:130] >       ],
	I1103 20:47:42.600018   98430 command_runner.go:130] >       "repoDigests": [
	I1103 20:47:42.600030   98430 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1103 20:47:42.600039   98430 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1103 20:47:42.600047   98430 command_runner.go:130] >       ],
	I1103 20:47:42.600057   98430 command_runner.go:130] >       "size": "295456551",
	I1103 20:47:42.600063   98430 command_runner.go:130] >       "uid": {
	I1103 20:47:42.600079   98430 command_runner.go:130] >         "value": "0"
	I1103 20:47:42.600086   98430 command_runner.go:130] >       },
	I1103 20:47:42.600090   98430 command_runner.go:130] >       "username": "",
	I1103 20:47:42.600098   98430 command_runner.go:130] >       "spec": null,
	I1103 20:47:42.600104   98430 command_runner.go:130] >       "pinned": false
	I1103 20:47:42.600108   98430 command_runner.go:130] >     },
	I1103 20:47:42.600114   98430 command_runner.go:130] >     {
	I1103 20:47:42.600124   98430 command_runner.go:130] >       "id": "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076",
	I1103 20:47:42.600131   98430 command_runner.go:130] >       "repoTags": [
	I1103 20:47:42.600136   98430 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1103 20:47:42.600143   98430 command_runner.go:130] >       ],
	I1103 20:47:42.600148   98430 command_runner.go:130] >       "repoDigests": [
	I1103 20:47:42.600157   98430 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab",
	I1103 20:47:42.600171   98430 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1103 20:47:42.600178   98430 command_runner.go:130] >       ],
	I1103 20:47:42.600183   98430 command_runner.go:130] >       "size": "127165392",
	I1103 20:47:42.600189   98430 command_runner.go:130] >       "uid": {
	I1103 20:47:42.600193   98430 command_runner.go:130] >         "value": "0"
	I1103 20:47:42.600197   98430 command_runner.go:130] >       },
	I1103 20:47:42.600201   98430 command_runner.go:130] >       "username": "",
	I1103 20:47:42.600207   98430 command_runner.go:130] >       "spec": null,
	I1103 20:47:42.600212   98430 command_runner.go:130] >       "pinned": false
	I1103 20:47:42.600215   98430 command_runner.go:130] >     },
	I1103 20:47:42.600219   98430 command_runner.go:130] >     {
	I1103 20:47:42.600226   98430 command_runner.go:130] >       "id": "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3",
	I1103 20:47:42.600232   98430 command_runner.go:130] >       "repoTags": [
	I1103 20:47:42.600238   98430 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1103 20:47:42.600245   98430 command_runner.go:130] >       ],
	I1103 20:47:42.600249   98430 command_runner.go:130] >       "repoDigests": [
	I1103 20:47:42.600259   98430 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1103 20:47:42.600269   98430 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"
	I1103 20:47:42.600275   98430 command_runner.go:130] >       ],
	I1103 20:47:42.600279   98430 command_runner.go:130] >       "size": "123188534",
	I1103 20:47:42.600285   98430 command_runner.go:130] >       "uid": {
	I1103 20:47:42.600289   98430 command_runner.go:130] >         "value": "0"
	I1103 20:47:42.600293   98430 command_runner.go:130] >       },
	I1103 20:47:42.600297   98430 command_runner.go:130] >       "username": "",
	I1103 20:47:42.600301   98430 command_runner.go:130] >       "spec": null,
	I1103 20:47:42.600306   98430 command_runner.go:130] >       "pinned": false
	I1103 20:47:42.600312   98430 command_runner.go:130] >     },
	I1103 20:47:42.600315   98430 command_runner.go:130] >     {
	I1103 20:47:42.600321   98430 command_runner.go:130] >       "id": "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf",
	I1103 20:47:42.600328   98430 command_runner.go:130] >       "repoTags": [
	I1103 20:47:42.600332   98430 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1103 20:47:42.600337   98430 command_runner.go:130] >       ],
	I1103 20:47:42.600344   98430 command_runner.go:130] >       "repoDigests": [
	I1103 20:47:42.600354   98430 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8",
	I1103 20:47:42.600361   98430 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1103 20:47:42.600380   98430 command_runner.go:130] >       ],
	I1103 20:47:42.600386   98430 command_runner.go:130] >       "size": "74691991",
	I1103 20:47:42.600390   98430 command_runner.go:130] >       "uid": null,
	I1103 20:47:42.600398   98430 command_runner.go:130] >       "username": "",
	I1103 20:47:42.600407   98430 command_runner.go:130] >       "spec": null,
	I1103 20:47:42.600412   98430 command_runner.go:130] >       "pinned": false
	I1103 20:47:42.600444   98430 command_runner.go:130] >     },
	I1103 20:47:42.600455   98430 command_runner.go:130] >     {
	I1103 20:47:42.600461   98430 command_runner.go:130] >       "id": "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4",
	I1103 20:47:42.600469   98430 command_runner.go:130] >       "repoTags": [
	I1103 20:47:42.600474   98430 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1103 20:47:42.600484   98430 command_runner.go:130] >       ],
	I1103 20:47:42.600489   98430 command_runner.go:130] >       "repoDigests": [
	I1103 20:47:42.600541   98430 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1103 20:47:42.600566   98430 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"
	I1103 20:47:42.600573   98430 command_runner.go:130] >       ],
	I1103 20:47:42.600580   98430 command_runner.go:130] >       "size": "61498678",
	I1103 20:47:42.600596   98430 command_runner.go:130] >       "uid": {
	I1103 20:47:42.600610   98430 command_runner.go:130] >         "value": "0"
	I1103 20:47:42.600617   98430 command_runner.go:130] >       },
	I1103 20:47:42.600622   98430 command_runner.go:130] >       "username": "",
	I1103 20:47:42.600631   98430 command_runner.go:130] >       "spec": null,
	I1103 20:47:42.600635   98430 command_runner.go:130] >       "pinned": false
	I1103 20:47:42.600640   98430 command_runner.go:130] >     },
	I1103 20:47:42.600645   98430 command_runner.go:130] >     {
	I1103 20:47:42.600653   98430 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1103 20:47:42.600657   98430 command_runner.go:130] >       "repoTags": [
	I1103 20:47:42.600664   98430 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1103 20:47:42.600668   98430 command_runner.go:130] >       ],
	I1103 20:47:42.600672   98430 command_runner.go:130] >       "repoDigests": [
	I1103 20:47:42.600682   98430 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1103 20:47:42.600689   98430 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1103 20:47:42.600695   98430 command_runner.go:130] >       ],
	I1103 20:47:42.600700   98430 command_runner.go:130] >       "size": "750414",
	I1103 20:47:42.600706   98430 command_runner.go:130] >       "uid": {
	I1103 20:47:42.600710   98430 command_runner.go:130] >         "value": "65535"
	I1103 20:47:42.600716   98430 command_runner.go:130] >       },
	I1103 20:47:42.600720   98430 command_runner.go:130] >       "username": "",
	I1103 20:47:42.600727   98430 command_runner.go:130] >       "spec": null,
	I1103 20:47:42.600731   98430 command_runner.go:130] >       "pinned": false
	I1103 20:47:42.600737   98430 command_runner.go:130] >     }
	I1103 20:47:42.600741   98430 command_runner.go:130] >   ]
	I1103 20:47:42.600747   98430 command_runner.go:130] > }
	I1103 20:47:42.601811   98430 crio.go:496] all images are preloaded for cri-o runtime.
	I1103 20:47:42.601829   98430 crio.go:415] Images already preloaded, skipping extraction
	I1103 20:47:42.601868   98430 ssh_runner.go:195] Run: sudo crictl images --output json
	I1103 20:47:42.630763   98430 command_runner.go:130] > {
	I1103 20:47:42.630788   98430 command_runner.go:130] >   "images": [
	I1103 20:47:42.630794   98430 command_runner.go:130] >     {
	I1103 20:47:42.630809   98430 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1103 20:47:42.630818   98430 command_runner.go:130] >       "repoTags": [
	I1103 20:47:42.630828   98430 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1103 20:47:42.630839   98430 command_runner.go:130] >       ],
	I1103 20:47:42.630847   98430 command_runner.go:130] >       "repoDigests": [
	I1103 20:47:42.630865   98430 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1103 20:47:42.630881   98430 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1103 20:47:42.630891   98430 command_runner.go:130] >       ],
	I1103 20:47:42.630900   98430 command_runner.go:130] >       "size": "65258016",
	I1103 20:47:42.630911   98430 command_runner.go:130] >       "uid": null,
	I1103 20:47:42.630922   98430 command_runner.go:130] >       "username": "",
	I1103 20:47:42.630934   98430 command_runner.go:130] >       "spec": null,
	I1103 20:47:42.630945   98430 command_runner.go:130] >       "pinned": false
	I1103 20:47:42.630954   98430 command_runner.go:130] >     },
	I1103 20:47:42.630961   98430 command_runner.go:130] >     {
	I1103 20:47:42.630975   98430 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1103 20:47:42.630985   98430 command_runner.go:130] >       "repoTags": [
	I1103 20:47:42.630995   98430 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1103 20:47:42.631000   98430 command_runner.go:130] >       ],
	I1103 20:47:42.631006   98430 command_runner.go:130] >       "repoDigests": [
	I1103 20:47:42.631020   98430 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1103 20:47:42.631034   98430 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1103 20:47:42.631040   98430 command_runner.go:130] >       ],
	I1103 20:47:42.631050   98430 command_runner.go:130] >       "size": "31470524",
	I1103 20:47:42.631057   98430 command_runner.go:130] >       "uid": null,
	I1103 20:47:42.631064   98430 command_runner.go:130] >       "username": "",
	I1103 20:47:42.631071   98430 command_runner.go:130] >       "spec": null,
	I1103 20:47:42.631079   98430 command_runner.go:130] >       "pinned": false
	I1103 20:47:42.631088   98430 command_runner.go:130] >     },
	I1103 20:47:42.631094   98430 command_runner.go:130] >     {
	I1103 20:47:42.631105   98430 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1103 20:47:42.631116   98430 command_runner.go:130] >       "repoTags": [
	I1103 20:47:42.631125   98430 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1103 20:47:42.631135   98430 command_runner.go:130] >       ],
	I1103 20:47:42.631142   98430 command_runner.go:130] >       "repoDigests": [
	I1103 20:47:42.631157   98430 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1103 20:47:42.631170   98430 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1103 20:47:42.631179   98430 command_runner.go:130] >       ],
	I1103 20:47:42.631189   98430 command_runner.go:130] >       "size": "53621675",
	I1103 20:47:42.631198   98430 command_runner.go:130] >       "uid": null,
	I1103 20:47:42.631207   98430 command_runner.go:130] >       "username": "",
	I1103 20:47:42.631214   98430 command_runner.go:130] >       "spec": null,
	I1103 20:47:42.631223   98430 command_runner.go:130] >       "pinned": false
	I1103 20:47:42.631228   98430 command_runner.go:130] >     },
	I1103 20:47:42.631238   98430 command_runner.go:130] >     {
	I1103 20:47:42.631248   98430 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1103 20:47:42.631258   98430 command_runner.go:130] >       "repoTags": [
	I1103 20:47:42.631266   98430 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1103 20:47:42.631275   98430 command_runner.go:130] >       ],
	I1103 20:47:42.631282   98430 command_runner.go:130] >       "repoDigests": [
	I1103 20:47:42.631295   98430 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1103 20:47:42.631309   98430 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1103 20:47:42.631322   98430 command_runner.go:130] >       ],
	I1103 20:47:42.631337   98430 command_runner.go:130] >       "size": "295456551",
	I1103 20:47:42.631346   98430 command_runner.go:130] >       "uid": {
	I1103 20:47:42.631353   98430 command_runner.go:130] >         "value": "0"
	I1103 20:47:42.631361   98430 command_runner.go:130] >       },
	I1103 20:47:42.631367   98430 command_runner.go:130] >       "username": "",
	I1103 20:47:42.631377   98430 command_runner.go:130] >       "spec": null,
	I1103 20:47:42.631384   98430 command_runner.go:130] >       "pinned": false
	I1103 20:47:42.631392   98430 command_runner.go:130] >     },
	I1103 20:47:42.631401   98430 command_runner.go:130] >     {
	I1103 20:47:42.631412   98430 command_runner.go:130] >       "id": "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076",
	I1103 20:47:42.631422   98430 command_runner.go:130] >       "repoTags": [
	I1103 20:47:42.631431   98430 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1103 20:47:42.631440   98430 command_runner.go:130] >       ],
	I1103 20:47:42.631447   98430 command_runner.go:130] >       "repoDigests": [
	I1103 20:47:42.631461   98430 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab",
	I1103 20:47:42.631476   98430 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1103 20:47:42.631485   98430 command_runner.go:130] >       ],
	I1103 20:47:42.631492   98430 command_runner.go:130] >       "size": "127165392",
	I1103 20:47:42.631507   98430 command_runner.go:130] >       "uid": {
	I1103 20:47:42.631518   98430 command_runner.go:130] >         "value": "0"
	I1103 20:47:42.631524   98430 command_runner.go:130] >       },
	I1103 20:47:42.631534   98430 command_runner.go:130] >       "username": "",
	I1103 20:47:42.631540   98430 command_runner.go:130] >       "spec": null,
	I1103 20:47:42.631549   98430 command_runner.go:130] >       "pinned": false
	I1103 20:47:42.631554   98430 command_runner.go:130] >     },
	I1103 20:47:42.631563   98430 command_runner.go:130] >     {
	I1103 20:47:42.631572   98430 command_runner.go:130] >       "id": "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3",
	I1103 20:47:42.631582   98430 command_runner.go:130] >       "repoTags": [
	I1103 20:47:42.631590   98430 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1103 20:47:42.631599   98430 command_runner.go:130] >       ],
	I1103 20:47:42.631606   98430 command_runner.go:130] >       "repoDigests": [
	I1103 20:47:42.631620   98430 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1103 20:47:42.631635   98430 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"
	I1103 20:47:42.631644   98430 command_runner.go:130] >       ],
	I1103 20:47:42.631651   98430 command_runner.go:130] >       "size": "123188534",
	I1103 20:47:42.631661   98430 command_runner.go:130] >       "uid": {
	I1103 20:47:42.631671   98430 command_runner.go:130] >         "value": "0"
	I1103 20:47:42.631677   98430 command_runner.go:130] >       },
	I1103 20:47:42.631687   98430 command_runner.go:130] >       "username": "",
	I1103 20:47:42.631697   98430 command_runner.go:130] >       "spec": null,
	I1103 20:47:42.631707   98430 command_runner.go:130] >       "pinned": false
	I1103 20:47:42.631714   98430 command_runner.go:130] >     },
	I1103 20:47:42.631723   98430 command_runner.go:130] >     {
	I1103 20:47:42.631734   98430 command_runner.go:130] >       "id": "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf",
	I1103 20:47:42.631744   98430 command_runner.go:130] >       "repoTags": [
	I1103 20:47:42.631756   98430 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1103 20:47:42.631766   98430 command_runner.go:130] >       ],
	I1103 20:47:42.631775   98430 command_runner.go:130] >       "repoDigests": [
	I1103 20:47:42.631791   98430 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8",
	I1103 20:47:42.631806   98430 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1103 20:47:42.631816   98430 command_runner.go:130] >       ],
	I1103 20:47:42.631823   98430 command_runner.go:130] >       "size": "74691991",
	I1103 20:47:42.631834   98430 command_runner.go:130] >       "uid": null,
	I1103 20:47:42.631844   98430 command_runner.go:130] >       "username": "",
	I1103 20:47:42.631857   98430 command_runner.go:130] >       "spec": null,
	I1103 20:47:42.631868   98430 command_runner.go:130] >       "pinned": false
	I1103 20:47:42.631879   98430 command_runner.go:130] >     },
	I1103 20:47:42.631886   98430 command_runner.go:130] >     {
	I1103 20:47:42.631904   98430 command_runner.go:130] >       "id": "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4",
	I1103 20:47:42.631914   98430 command_runner.go:130] >       "repoTags": [
	I1103 20:47:42.631927   98430 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1103 20:47:42.631939   98430 command_runner.go:130] >       ],
	I1103 20:47:42.631948   98430 command_runner.go:130] >       "repoDigests": [
	I1103 20:47:42.632005   98430 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1103 20:47:42.632024   98430 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"
	I1103 20:47:42.632031   98430 command_runner.go:130] >       ],
	I1103 20:47:42.632040   98430 command_runner.go:130] >       "size": "61498678",
	I1103 20:47:42.632051   98430 command_runner.go:130] >       "uid": {
	I1103 20:47:42.632058   98430 command_runner.go:130] >         "value": "0"
	I1103 20:47:42.632067   98430 command_runner.go:130] >       },
	I1103 20:47:42.632074   98430 command_runner.go:130] >       "username": "",
	I1103 20:47:42.632084   98430 command_runner.go:130] >       "spec": null,
	I1103 20:47:42.632094   98430 command_runner.go:130] >       "pinned": false
	I1103 20:47:42.632102   98430 command_runner.go:130] >     },
	I1103 20:47:42.632106   98430 command_runner.go:130] >     {
	I1103 20:47:42.632116   98430 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1103 20:47:42.632121   98430 command_runner.go:130] >       "repoTags": [
	I1103 20:47:42.632131   98430 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1103 20:47:42.632141   98430 command_runner.go:130] >       ],
	I1103 20:47:42.632148   98430 command_runner.go:130] >       "repoDigests": [
	I1103 20:47:42.632164   98430 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1103 20:47:42.632179   98430 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1103 20:47:42.632188   98430 command_runner.go:130] >       ],
	I1103 20:47:42.632199   98430 command_runner.go:130] >       "size": "750414",
	I1103 20:47:42.632208   98430 command_runner.go:130] >       "uid": {
	I1103 20:47:42.632213   98430 command_runner.go:130] >         "value": "65535"
	I1103 20:47:42.632219   98430 command_runner.go:130] >       },
	I1103 20:47:42.632226   98430 command_runner.go:130] >       "username": "",
	I1103 20:47:42.632236   98430 command_runner.go:130] >       "spec": null,
	I1103 20:47:42.632244   98430 command_runner.go:130] >       "pinned": false
	I1103 20:47:42.632254   98430 command_runner.go:130] >     }
	I1103 20:47:42.632260   98430 command_runner.go:130] >   ]
	I1103 20:47:42.632269   98430 command_runner.go:130] > }
	I1103 20:47:42.633115   98430 crio.go:496] all images are preloaded for cri-o runtime.
	I1103 20:47:42.633138   98430 cache_images.go:84] Images are preloaded, skipping loading
	I1103 20:47:42.633195   98430 ssh_runner.go:195] Run: crio config
	I1103 20:47:42.669181   98430 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1103 20:47:42.669215   98430 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1103 20:47:42.669227   98430 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1103 20:47:42.669233   98430 command_runner.go:130] > #
	I1103 20:47:42.669244   98430 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1103 20:47:42.669255   98430 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1103 20:47:42.669265   98430 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1103 20:47:42.669281   98430 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1103 20:47:42.669291   98430 command_runner.go:130] > # reload'.
	I1103 20:47:42.669302   98430 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1103 20:47:42.669312   98430 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1103 20:47:42.669318   98430 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1103 20:47:42.669327   98430 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1103 20:47:42.669331   98430 command_runner.go:130] > [crio]
	I1103 20:47:42.669339   98430 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1103 20:47:42.669350   98430 command_runner.go:130] > # container images, in this directory.
	I1103 20:47:42.669361   98430 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1103 20:47:42.669375   98430 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1103 20:47:42.669387   98430 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1103 20:47:42.669401   98430 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1103 20:47:42.669414   98430 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1103 20:47:42.669424   98430 command_runner.go:130] > # storage_driver = "vfs"
	I1103 20:47:42.669437   98430 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1103 20:47:42.669451   98430 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1103 20:47:42.669462   98430 command_runner.go:130] > # storage_option = [
	I1103 20:47:42.669468   98430 command_runner.go:130] > # ]
	I1103 20:47:42.669483   98430 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1103 20:47:42.669496   98430 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1103 20:47:42.669507   98430 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1103 20:47:42.669519   98430 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1103 20:47:42.669532   98430 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1103 20:47:42.669541   98430 command_runner.go:130] > # always happen on a node reboot
	I1103 20:47:42.669552   98430 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1103 20:47:42.669567   98430 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1103 20:47:42.669580   98430 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1103 20:47:42.669596   98430 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1103 20:47:42.669609   98430 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1103 20:47:42.669627   98430 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1103 20:47:42.669647   98430 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1103 20:47:42.669654   98430 command_runner.go:130] > # internal_wipe = true
	I1103 20:47:42.669664   98430 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1103 20:47:42.669681   98430 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1103 20:47:42.669690   98430 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1103 20:47:42.669699   98430 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1103 20:47:42.669711   98430 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1103 20:47:42.669717   98430 command_runner.go:130] > [crio.api]
	I1103 20:47:42.669726   98430 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1103 20:47:42.669738   98430 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1103 20:47:42.669750   98430 command_runner.go:130] > # IP address on which the stream server will listen.
	I1103 20:47:42.669761   98430 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1103 20:47:42.669773   98430 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1103 20:47:42.669785   98430 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1103 20:47:42.669795   98430 command_runner.go:130] > # stream_port = "0"
	I1103 20:47:42.669806   98430 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1103 20:47:42.669816   98430 command_runner.go:130] > # stream_enable_tls = false
	I1103 20:47:42.669828   98430 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1103 20:47:42.669839   98430 command_runner.go:130] > # stream_idle_timeout = ""
	I1103 20:47:42.669852   98430 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1103 20:47:42.669871   98430 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1103 20:47:42.669882   98430 command_runner.go:130] > # minutes.
	I1103 20:47:42.669888   98430 command_runner.go:130] > # stream_tls_cert = ""
	I1103 20:47:42.669903   98430 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1103 20:47:42.669917   98430 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1103 20:47:42.669927   98430 command_runner.go:130] > # stream_tls_key = ""
	I1103 20:47:42.669938   98430 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1103 20:47:42.669951   98430 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1103 20:47:42.669962   98430 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1103 20:47:42.669972   98430 command_runner.go:130] > # stream_tls_ca = ""
	I1103 20:47:42.669985   98430 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1103 20:47:42.669996   98430 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1103 20:47:42.670008   98430 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1103 20:47:42.670068   98430 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1103 20:47:42.670094   98430 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1103 20:47:42.670107   98430 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1103 20:47:42.670114   98430 command_runner.go:130] > [crio.runtime]
	I1103 20:47:42.670128   98430 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1103 20:47:42.670138   98430 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1103 20:47:42.670143   98430 command_runner.go:130] > # "nofile=1024:2048"
	I1103 20:47:42.670149   98430 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1103 20:47:42.670154   98430 command_runner.go:130] > # default_ulimits = [
	I1103 20:47:42.670157   98430 command_runner.go:130] > # ]
	I1103 20:47:42.670170   98430 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1103 20:47:42.670178   98430 command_runner.go:130] > # no_pivot = false
	I1103 20:47:42.670192   98430 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1103 20:47:42.670203   98430 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1103 20:47:42.670215   98430 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1103 20:47:42.670228   98430 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1103 20:47:42.670237   98430 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1103 20:47:42.670253   98430 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1103 20:47:42.670260   98430 command_runner.go:130] > # conmon = ""
	I1103 20:47:42.670268   98430 command_runner.go:130] > # Cgroup setting for conmon
	I1103 20:47:42.670283   98430 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1103 20:47:42.670294   98430 command_runner.go:130] > conmon_cgroup = "pod"
	I1103 20:47:42.670304   98430 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1103 20:47:42.670319   98430 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1103 20:47:42.670331   98430 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1103 20:47:42.670342   98430 command_runner.go:130] > # conmon_env = [
	I1103 20:47:42.670349   98430 command_runner.go:130] > # ]
	I1103 20:47:42.670358   98430 command_runner.go:130] > # Additional environment variables to set for all the
	I1103 20:47:42.670366   98430 command_runner.go:130] > # containers. These are overridden if set in the
	I1103 20:47:42.670381   98430 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1103 20:47:42.670392   98430 command_runner.go:130] > # default_env = [
	I1103 20:47:42.670400   98430 command_runner.go:130] > # ]
	I1103 20:47:42.670410   98430 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1103 20:47:42.670420   98430 command_runner.go:130] > # selinux = false
	I1103 20:47:42.670431   98430 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1103 20:47:42.670447   98430 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1103 20:47:42.670460   98430 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1103 20:47:42.670470   98430 command_runner.go:130] > # seccomp_profile = ""
	I1103 20:47:42.670480   98430 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1103 20:47:42.670494   98430 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1103 20:47:42.670507   98430 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1103 20:47:42.670515   98430 command_runner.go:130] > # which might increase security.
	I1103 20:47:42.670528   98430 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1103 20:47:42.670537   98430 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1103 20:47:42.670546   98430 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1103 20:47:42.670556   98430 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1103 20:47:42.670566   98430 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1103 20:47:42.670574   98430 command_runner.go:130] > # This option supports live configuration reload.
	I1103 20:47:42.670582   98430 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1103 20:47:42.670591   98430 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1103 20:47:42.670599   98430 command_runner.go:130] > # the cgroup blockio controller.
	I1103 20:47:42.670606   98430 command_runner.go:130] > # blockio_config_file = ""
	I1103 20:47:42.670622   98430 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1103 20:47:42.670630   98430 command_runner.go:130] > # irqbalance daemon.
	I1103 20:47:42.670639   98430 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1103 20:47:42.670650   98430 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1103 20:47:42.670658   98430 command_runner.go:130] > # This option supports live configuration reload.
	I1103 20:47:42.670701   98430 command_runner.go:130] > # rdt_config_file = ""
	I1103 20:47:42.670723   98430 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1103 20:47:42.670730   98430 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1103 20:47:42.670737   98430 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1103 20:47:42.670741   98430 command_runner.go:130] > # separate_pull_cgroup = ""
	I1103 20:47:42.670748   98430 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1103 20:47:42.670754   98430 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1103 20:47:42.670758   98430 command_runner.go:130] > # will be added.
	I1103 20:47:42.670762   98430 command_runner.go:130] > # default_capabilities = [
	I1103 20:47:42.670765   98430 command_runner.go:130] > # 	"CHOWN",
	I1103 20:47:42.670769   98430 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1103 20:47:42.670773   98430 command_runner.go:130] > # 	"FSETID",
	I1103 20:47:42.670777   98430 command_runner.go:130] > # 	"FOWNER",
	I1103 20:47:42.670780   98430 command_runner.go:130] > # 	"SETGID",
	I1103 20:47:42.670784   98430 command_runner.go:130] > # 	"SETUID",
	I1103 20:47:42.670788   98430 command_runner.go:130] > # 	"SETPCAP",
	I1103 20:47:42.670792   98430 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1103 20:47:42.670795   98430 command_runner.go:130] > # 	"KILL",
	I1103 20:47:42.670799   98430 command_runner.go:130] > # ]
	I1103 20:47:42.670806   98430 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1103 20:47:42.670813   98430 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1103 20:47:42.670817   98430 command_runner.go:130] > # add_inheritable_capabilities = true
	I1103 20:47:42.670823   98430 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1103 20:47:42.670829   98430 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1103 20:47:42.670832   98430 command_runner.go:130] > # default_sysctls = [
	I1103 20:47:42.670836   98430 command_runner.go:130] > # ]
	I1103 20:47:42.670845   98430 command_runner.go:130] > # List of devices on the host that a
	I1103 20:47:42.670851   98430 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1103 20:47:42.670855   98430 command_runner.go:130] > # allowed_devices = [
	I1103 20:47:42.670858   98430 command_runner.go:130] > # 	"/dev/fuse",
	I1103 20:47:42.670861   98430 command_runner.go:130] > # ]
	I1103 20:47:42.670867   98430 command_runner.go:130] > # List of additional devices, specified as
	I1103 20:47:42.670923   98430 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1103 20:47:42.670929   98430 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1103 20:47:42.670934   98430 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1103 20:47:42.670938   98430 command_runner.go:130] > # additional_devices = [
	I1103 20:47:42.670941   98430 command_runner.go:130] > # ]
	I1103 20:47:42.670946   98430 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1103 20:47:42.670951   98430 command_runner.go:130] > # cdi_spec_dirs = [
	I1103 20:47:42.670955   98430 command_runner.go:130] > # 	"/etc/cdi",
	I1103 20:47:42.670958   98430 command_runner.go:130] > # 	"/var/run/cdi",
	I1103 20:47:42.670962   98430 command_runner.go:130] > # ]
	I1103 20:47:42.670968   98430 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1103 20:47:42.670973   98430 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1103 20:47:42.670977   98430 command_runner.go:130] > # Defaults to false.
	I1103 20:47:42.670981   98430 command_runner.go:130] > # device_ownership_from_security_context = false
	I1103 20:47:42.670987   98430 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1103 20:47:42.670993   98430 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1103 20:47:42.670996   98430 command_runner.go:130] > # hooks_dir = [
	I1103 20:47:42.671003   98430 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1103 20:47:42.671006   98430 command_runner.go:130] > # ]
	I1103 20:47:42.671012   98430 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1103 20:47:42.671018   98430 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1103 20:47:42.671022   98430 command_runner.go:130] > # its default mounts from the following two files:
	I1103 20:47:42.671026   98430 command_runner.go:130] > #
	I1103 20:47:42.671032   98430 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1103 20:47:42.671038   98430 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1103 20:47:42.671043   98430 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1103 20:47:42.671046   98430 command_runner.go:130] > #
	I1103 20:47:42.671052   98430 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1103 20:47:42.671058   98430 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1103 20:47:42.671064   98430 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1103 20:47:42.671069   98430 command_runner.go:130] > #      only add mounts it finds in this file.
	I1103 20:47:42.671072   98430 command_runner.go:130] > #
	I1103 20:47:42.671076   98430 command_runner.go:130] > # default_mounts_file = ""
	I1103 20:47:42.671081   98430 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1103 20:47:42.671087   98430 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1103 20:47:42.671092   98430 command_runner.go:130] > # pids_limit = 0
	I1103 20:47:42.671098   98430 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1103 20:47:42.671104   98430 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1103 20:47:42.671110   98430 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1103 20:47:42.671117   98430 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1103 20:47:42.671121   98430 command_runner.go:130] > # log_size_max = -1
	I1103 20:47:42.671127   98430 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1103 20:47:42.671132   98430 command_runner.go:130] > # log_to_journald = false
	I1103 20:47:42.671138   98430 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1103 20:47:42.671143   98430 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1103 20:47:42.671147   98430 command_runner.go:130] > # Path to directory for container attach sockets.
	I1103 20:47:42.671152   98430 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1103 20:47:42.671157   98430 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1103 20:47:42.671161   98430 command_runner.go:130] > # bind_mount_prefix = ""
	I1103 20:47:42.671166   98430 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1103 20:47:42.671170   98430 command_runner.go:130] > # read_only = false
	I1103 20:47:42.671176   98430 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1103 20:47:42.671182   98430 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1103 20:47:42.671186   98430 command_runner.go:130] > # live configuration reload.
	I1103 20:47:42.671189   98430 command_runner.go:130] > # log_level = "info"
	I1103 20:47:42.671195   98430 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1103 20:47:42.671200   98430 command_runner.go:130] > # This option supports live configuration reload.
	I1103 20:47:42.671203   98430 command_runner.go:130] > # log_filter = ""
	I1103 20:47:42.671209   98430 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1103 20:47:42.671214   98430 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1103 20:47:42.671219   98430 command_runner.go:130] > # separated by comma.
	I1103 20:47:42.671223   98430 command_runner.go:130] > # uid_mappings = ""
	I1103 20:47:42.671229   98430 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1103 20:47:42.671234   98430 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1103 20:47:42.671238   98430 command_runner.go:130] > # separated by comma.
	I1103 20:47:42.671242   98430 command_runner.go:130] > # gid_mappings = ""
	I1103 20:47:42.671247   98430 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1103 20:47:42.671253   98430 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1103 20:47:42.671259   98430 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1103 20:47:42.671262   98430 command_runner.go:130] > # minimum_mappable_uid = -1
	I1103 20:47:42.671268   98430 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1103 20:47:42.671274   98430 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1103 20:47:42.671281   98430 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1103 20:47:42.671285   98430 command_runner.go:130] > # minimum_mappable_gid = -1
	I1103 20:47:42.671290   98430 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1103 20:47:42.671296   98430 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1103 20:47:42.671301   98430 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1103 20:47:42.671305   98430 command_runner.go:130] > # ctr_stop_timeout = 30
	I1103 20:47:42.671311   98430 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1103 20:47:42.671331   98430 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1103 20:47:42.671336   98430 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1103 20:47:42.671341   98430 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1103 20:47:42.671345   98430 command_runner.go:130] > # drop_infra_ctr = true
	I1103 20:47:42.671351   98430 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1103 20:47:42.671362   98430 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1103 20:47:42.671368   98430 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1103 20:47:42.671372   98430 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1103 20:47:42.671378   98430 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1103 20:47:42.671383   98430 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1103 20:47:42.671387   98430 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1103 20:47:42.671393   98430 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1103 20:47:42.671397   98430 command_runner.go:130] > # pinns_path = ""
	I1103 20:47:42.671403   98430 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1103 20:47:42.671409   98430 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1103 20:47:42.671414   98430 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1103 20:47:42.671418   98430 command_runner.go:130] > # default_runtime = "runc"
	I1103 20:47:42.671423   98430 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1103 20:47:42.671430   98430 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1103 20:47:42.671438   98430 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1103 20:47:42.671443   98430 command_runner.go:130] > # creation as a file is not desired either.
	I1103 20:47:42.671450   98430 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1103 20:47:42.671455   98430 command_runner.go:130] > # the hostname is being managed dynamically.
	I1103 20:47:42.671459   98430 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1103 20:47:42.671462   98430 command_runner.go:130] > # ]
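As a concrete sketch of the option just described (hedged: the drop-in path and file name are illustrative, and the setting ships commented out), the /etc/hostname case could be enforced via a CRI-O drop-in rather than by editing the main config:

	cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/30-absent-mounts.conf
	[crio.runtime]
	# reject mounts whose host source is absent instead of creating a directory
	absent_mount_sources_to_reject = ["/etc/hostname"]
	EOF
	sudo systemctl restart crio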
	I1103 20:47:42.671468   98430 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1103 20:47:42.671474   98430 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1103 20:47:42.671480   98430 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1103 20:47:42.671487   98430 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1103 20:47:42.671490   98430 command_runner.go:130] > #
	I1103 20:47:42.671495   98430 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1103 20:47:42.671500   98430 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1103 20:47:42.671503   98430 command_runner.go:130] > #  runtime_type = "oci"
	I1103 20:47:42.671510   98430 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1103 20:47:42.671515   98430 command_runner.go:130] > #  privileged_without_host_devices = false
	I1103 20:47:42.671520   98430 command_runner.go:130] > #  allowed_annotations = []
	I1103 20:47:42.671523   98430 command_runner.go:130] > # Where:
	I1103 20:47:42.671528   98430 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1103 20:47:42.671534   98430 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1103 20:47:42.671540   98430 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1103 20:47:42.671546   98430 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1103 20:47:42.671550   98430 command_runner.go:130] > #   in $PATH.
	I1103 20:47:42.671555   98430 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1103 20:47:42.671560   98430 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1103 20:47:42.671566   98430 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1103 20:47:42.671570   98430 command_runner.go:130] > #   state.
	I1103 20:47:42.671575   98430 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1103 20:47:42.671581   98430 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1103 20:47:42.671587   98430 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1103 20:47:42.671592   98430 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1103 20:47:42.671598   98430 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1103 20:47:42.671604   98430 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1103 20:47:42.671608   98430 command_runner.go:130] > #   The currently recognized values are:
	I1103 20:47:42.671615   98430 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1103 20:47:42.671622   98430 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1103 20:47:42.671628   98430 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1103 20:47:42.671634   98430 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1103 20:47:42.671641   98430 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1103 20:47:42.671647   98430 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1103 20:47:42.671653   98430 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1103 20:47:42.671659   98430 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1103 20:47:42.671663   98430 command_runner.go:130] > #   should be moved to the container's cgroup
	I1103 20:47:42.671668   98430 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1103 20:47:42.671674   98430 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1103 20:47:42.671677   98430 command_runner.go:130] > runtime_type = "oci"
	I1103 20:47:42.671681   98430 command_runner.go:130] > runtime_root = "/run/runc"
	I1103 20:47:42.671685   98430 command_runner.go:130] > runtime_config_path = ""
	I1103 20:47:42.671689   98430 command_runner.go:130] > monitor_path = ""
	I1103 20:47:42.671693   98430 command_runner.go:130] > monitor_cgroup = ""
	I1103 20:47:42.671697   98430 command_runner.go:130] > monitor_exec_cgroup = ""
	I1103 20:47:42.671732   98430 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1103 20:47:42.671737   98430 command_runner.go:130] > # running containers
	I1103 20:47:42.671741   98430 command_runner.go:130] > #[crio.runtime.runtimes.crun]
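To turn the commented crun stub above into a working handler, a drop-in file following the runtime-handler format is enough; a minimal sketch (assuming crun is installed at /usr/bin/crun — the path and file name are illustrative):

	cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/10-crun.conf
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"   # assumed install location
	runtime_type = "oci"
	runtime_root = "/run/crun"
	EOF
	sudo systemctl restart crio

A pod can then request this handler through a RuntimeClass whose handler field is "crun".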
	I1103 20:47:42.671747   98430 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1103 20:47:42.671753   98430 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1103 20:47:42.671759   98430 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1103 20:47:42.671764   98430 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1103 20:47:42.671768   98430 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1103 20:47:42.671772   98430 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1103 20:47:42.671776   98430 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1103 20:47:42.671781   98430 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1103 20:47:42.671785   98430 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1103 20:47:42.671791   98430 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1103 20:47:42.671797   98430 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1103 20:47:42.671803   98430 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1103 20:47:42.671810   98430 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1103 20:47:42.671817   98430 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1103 20:47:42.671822   98430 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1103 20:47:42.671831   98430 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1103 20:47:42.671838   98430 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1103 20:47:42.671843   98430 command_runner.go:130] > # signifying that the default value for that resource type should be overridden.
	I1103 20:47:42.671849   98430 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1103 20:47:42.671853   98430 command_runner.go:130] > # Example:
	I1103 20:47:42.671857   98430 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1103 20:47:42.671862   98430 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1103 20:47:42.671866   98430 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1103 20:47:42.671871   98430 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1103 20:47:42.671874   98430 command_runner.go:130] > # cpuset = "0-1"
	I1103 20:47:42.671878   98430 command_runner.go:130] > # cpushares = 0
	I1103 20:47:42.671882   98430 command_runner.go:130] > # Where:
	I1103 20:47:42.671886   98430 command_runner.go:130] > # The workload name is workload-type.
	I1103 20:47:42.671894   98430 command_runner.go:130] > # To select it, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1103 20:47:42.671899   98430 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1103 20:47:42.671904   98430 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1103 20:47:42.671911   98430 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1103 20:47:42.671916   98430 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
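Put together, a pod opting into the example workload above would carry both annotations; a hedged sketch (pod and container names are illustrative, and the workload must already be defined in crio.conf as shown):

	kubectl apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""                               # activation annotation (key only, value ignored)
	    io.crio.workload-type/app: '{"cpushares": "512"}'  # per-container override for container "app"
	spec:
	  containers:
	  - name: app
	    image: nginx
	EOF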
	I1103 20:47:42.671919   98430 command_runner.go:130] > # 
	I1103 20:47:42.671926   98430 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1103 20:47:42.671930   98430 command_runner.go:130] > #
	I1103 20:47:42.671935   98430 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1103 20:47:42.671941   98430 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1103 20:47:42.671947   98430 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1103 20:47:42.671953   98430 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1103 20:47:42.671960   98430 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1103 20:47:42.671963   98430 command_runner.go:130] > [crio.image]
	I1103 20:47:42.671969   98430 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1103 20:47:42.671973   98430 command_runner.go:130] > # default_transport = "docker://"
	I1103 20:47:42.671979   98430 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1103 20:47:42.671985   98430 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1103 20:47:42.671989   98430 command_runner.go:130] > # global_auth_file = ""
	I1103 20:47:42.671993   98430 command_runner.go:130] > # The image used to instantiate infra containers.
	I1103 20:47:42.671998   98430 command_runner.go:130] > # This option supports live configuration reload.
	I1103 20:47:42.672002   98430 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1103 20:47:42.672008   98430 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1103 20:47:42.672014   98430 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1103 20:47:42.672019   98430 command_runner.go:130] > # This option supports live configuration reload.
	I1103 20:47:42.672023   98430 command_runner.go:130] > # pause_image_auth_file = ""
	I1103 20:47:42.672028   98430 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1103 20:47:42.672034   98430 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1103 20:47:42.672040   98430 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1103 20:47:42.672045   98430 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1103 20:47:42.672050   98430 command_runner.go:130] > # pause_command = "/pause"
	I1103 20:47:42.672055   98430 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1103 20:47:42.672061   98430 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1103 20:47:42.672067   98430 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1103 20:47:42.672073   98430 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1103 20:47:42.672079   98430 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1103 20:47:42.672083   98430 command_runner.go:130] > # signature_policy = ""
	I1103 20:47:42.672102   98430 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1103 20:47:42.672108   98430 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1103 20:47:42.672112   98430 command_runner.go:130] > # changing them here.
	I1103 20:47:42.672116   98430 command_runner.go:130] > # insecure_registries = [
	I1103 20:47:42.672119   98430 command_runner.go:130] > # ]
	I1103 20:47:42.672130   98430 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1103 20:47:42.672135   98430 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1103 20:47:42.672139   98430 command_runner.go:130] > # image_volumes = "mkdir"
	I1103 20:47:42.672144   98430 command_runner.go:130] > # Temporary directory to use for storing big files
	I1103 20:47:42.672148   98430 command_runner.go:130] > # big_files_temporary_dir = ""
	I1103 20:47:42.672154   98430 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1103 20:47:42.672158   98430 command_runner.go:130] > # CNI plugins.
	I1103 20:47:42.672161   98430 command_runner.go:130] > [crio.network]
	I1103 20:47:42.672167   98430 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1103 20:47:42.672172   98430 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1103 20:47:42.672176   98430 command_runner.go:130] > # cni_default_network = ""
	I1103 20:47:42.672181   98430 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1103 20:47:42.672186   98430 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1103 20:47:42.672191   98430 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1103 20:47:42.672195   98430 command_runner.go:130] > # plugin_dirs = [
	I1103 20:47:42.672198   98430 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1103 20:47:42.672202   98430 command_runner.go:130] > # ]
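On a live node the effective CNI state can be compared against these defaults; a small sketch using the documented paths:

	ls /etc/cni/net.d/   # config files; without cni_default_network the first one found wins
	ls /opt/cni/bin/     # plugin binaries, searched via plugin_dirs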
	I1103 20:47:42.672207   98430 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1103 20:47:42.672211   98430 command_runner.go:130] > [crio.metrics]
	I1103 20:47:42.672215   98430 command_runner.go:130] > # Globally enable or disable metrics support.
	I1103 20:47:42.672219   98430 command_runner.go:130] > # enable_metrics = false
	I1103 20:47:42.672224   98430 command_runner.go:130] > # Specify enabled metrics collectors.
	I1103 20:47:42.672228   98430 command_runner.go:130] > # By default, all metrics are enabled.
	I1103 20:47:42.672234   98430 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1103 20:47:42.672239   98430 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1103 20:47:42.672245   98430 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1103 20:47:42.672249   98430 command_runner.go:130] > # metrics_collectors = [
	I1103 20:47:42.672252   98430 command_runner.go:130] > # 	"operations",
	I1103 20:47:42.672257   98430 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1103 20:47:42.672261   98430 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1103 20:47:42.672265   98430 command_runner.go:130] > # 	"operations_errors",
	I1103 20:47:42.672269   98430 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1103 20:47:42.672273   98430 command_runner.go:130] > # 	"image_pulls_by_name",
	I1103 20:47:42.672277   98430 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1103 20:47:42.672281   98430 command_runner.go:130] > # 	"image_pulls_failures",
	I1103 20:47:42.672287   98430 command_runner.go:130] > # 	"image_pulls_successes",
	I1103 20:47:42.672292   98430 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1103 20:47:42.672296   98430 command_runner.go:130] > # 	"image_layer_reuse",
	I1103 20:47:42.672299   98430 command_runner.go:130] > # 	"containers_oom_total",
	I1103 20:47:42.672303   98430 command_runner.go:130] > # 	"containers_oom",
	I1103 20:47:42.672307   98430 command_runner.go:130] > # 	"processes_defunct",
	I1103 20:47:42.672311   98430 command_runner.go:130] > # 	"operations_total",
	I1103 20:47:42.672315   98430 command_runner.go:130] > # 	"operations_latency_seconds",
	I1103 20:47:42.672320   98430 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1103 20:47:42.672326   98430 command_runner.go:130] > # 	"operations_errors_total",
	I1103 20:47:42.672330   98430 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1103 20:47:42.672335   98430 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1103 20:47:42.672339   98430 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1103 20:47:42.672343   98430 command_runner.go:130] > # 	"image_pulls_success_total",
	I1103 20:47:42.672347   98430 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1103 20:47:42.672351   98430 command_runner.go:130] > # 	"containers_oom_count_total",
	I1103 20:47:42.672359   98430 command_runner.go:130] > # ]
	I1103 20:47:42.672364   98430 command_runner.go:130] > # The port on which the metrics server will listen.
	I1103 20:47:42.672368   98430 command_runner.go:130] > # metrics_port = 9090
	I1103 20:47:42.672373   98430 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1103 20:47:42.672377   98430 command_runner.go:130] > # metrics_socket = ""
	I1103 20:47:42.672382   98430 command_runner.go:130] > # The certificate for the secure metrics server.
	I1103 20:47:42.672387   98430 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1103 20:47:42.672393   98430 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1103 20:47:42.672398   98430 command_runner.go:130] > # certificate on any modification event.
	I1103 20:47:42.672402   98430 command_runner.go:130] > # metrics_cert = ""
	I1103 20:47:42.672407   98430 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1103 20:47:42.672411   98430 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1103 20:47:42.672415   98430 command_runner.go:130] > # metrics_key = ""
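Putting the metrics settings together, a hedged sketch of enabling and scraping them (the drop-in path and collector list are illustrative; 9090 is the documented default port):

	cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/20-metrics.conf
	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	metrics_collectors = ["operations", "image_pulls_failure_total"]
	EOF
	sudo systemctl restart crio
	curl -s http://127.0.0.1:9090/metrics | grep -m 3 'crio_'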
	I1103 20:47:42.672452   98430 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1103 20:47:42.672460   98430 command_runner.go:130] > [crio.tracing]
	I1103 20:47:42.672468   98430 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1103 20:47:42.672475   98430 command_runner.go:130] > # enable_tracing = false
	I1103 20:47:42.672496   98430 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1103 20:47:42.672501   98430 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1103 20:47:42.672506   98430 command_runner.go:130] > # Number of samples to collect per million spans.
	I1103 20:47:42.672511   98430 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1103 20:47:42.672518   98430 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1103 20:47:42.672526   98430 command_runner.go:130] > [crio.stats]
	I1103 20:47:42.672531   98430 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1103 20:47:42.672537   98430 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1103 20:47:42.672544   98430 command_runner.go:130] > # stats_collection_period = 0
	I1103 20:47:42.674290   98430 command_runner.go:130] ! time="2023-11-03 20:47:42.666988477Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1103 20:47:42.674313   98430 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1103 20:47:42.674391   98430 cni.go:84] Creating CNI manager for ""
	I1103 20:47:42.674401   98430 cni.go:136] 1 nodes found, recommending kindnet
	I1103 20:47:42.674418   98430 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1103 20:47:42.674438   98430 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-280480 NodeName:multinode-280480 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1103 20:47:42.674567   98430 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-280480"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
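Before handing this file to kubeadm init, it can be sanity-checked in place; a hedged sketch using the paths from this log ('kubeadm config validate' exists in recent kubeadm releases, including v1.28):

	sudo /var/lib/minikube/binaries/v1.28.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml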
	I1103 20:47:42.674619   98430 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-280480 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-280480 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1103 20:47:42.674661   98430 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1103 20:47:42.681887   98430 command_runner.go:130] > kubeadm
	I1103 20:47:42.681904   98430 command_runner.go:130] > kubectl
	I1103 20:47:42.681909   98430 command_runner.go:130] > kubelet
	I1103 20:47:42.682476   98430 binaries.go:44] Found k8s binaries, skipping transfer
	I1103 20:47:42.682534   98430 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1103 20:47:42.690003   98430 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1103 20:47:42.705105   98430 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1103 20:47:42.720119   98430 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1103 20:47:42.734974   98430 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1103 20:47:42.737809   98430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1103 20:47:42.746622   98430 certs.go:56] Setting up /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480 for IP: 192.168.58.2
	I1103 20:47:42.746650   98430 certs.go:190] acquiring lock for shared ca certs: {Name:mk18b7761724bd0081d8ca2b791d44e447ae6553 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:47:42.746783   98430 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.key
	I1103 20:47:42.746821   98430 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17545-5130/.minikube/proxy-client-ca.key
	I1103 20:47:42.746877   98430 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/client.key
	I1103 20:47:42.746894   98430 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/client.crt with IP's: []
	I1103 20:47:42.880516   98430 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/client.crt ...
	I1103 20:47:42.880546   98430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/client.crt: {Name:mk9fc36052ccc907b7eaab033866ee8000dfe350 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:47:42.880708   98430 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/client.key ...
	I1103 20:47:42.880720   98430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/client.key: {Name:mk3627c2826d69c2335c5f7f466ae5c8a721d51d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:47:42.880789   98430 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/apiserver.key.cee25041
	I1103 20:47:42.880834   98430 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1103 20:47:42.992536   98430 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/apiserver.crt.cee25041 ...
	I1103 20:47:42.992563   98430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/apiserver.crt.cee25041: {Name:mkecfd9f51f8338469509149c78bd6f86900206c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:47:42.992725   98430 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/apiserver.key.cee25041 ...
	I1103 20:47:42.992741   98430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/apiserver.key.cee25041: {Name:mk6acbeaf7870439885126982f81ef74dbc83bff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:47:42.992841   98430 certs.go:337] copying /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/apiserver.crt
	I1103 20:47:42.992921   98430 certs.go:341] copying /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/apiserver.key
	I1103 20:47:42.992972   98430 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/proxy-client.key
	I1103 20:47:42.992985   98430 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/proxy-client.crt with IP's: []
	I1103 20:47:43.142303   98430 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/proxy-client.crt ...
	I1103 20:47:43.142331   98430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/proxy-client.crt: {Name:mk411854f1d03b764cb2dc735f9a924f894df29d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:47:43.142477   98430 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/proxy-client.key ...
	I1103 20:47:43.142492   98430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/proxy-client.key: {Name:mk48babe74bc3867041a9cdcd67f2f7e637eecb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:47:43.142553   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1103 20:47:43.142570   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1103 20:47:43.142581   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1103 20:47:43.142593   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1103 20:47:43.142603   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1103 20:47:43.142616   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1103 20:47:43.142629   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1103 20:47:43.142641   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1103 20:47:43.142687   98430 certs.go:437] found cert: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/home/jenkins/minikube-integration/17545-5130/.minikube/certs/11887.pem (1338 bytes)
	W1103 20:47:43.142723   98430 certs.go:433] ignoring /home/jenkins/minikube-integration/17545-5130/.minikube/certs/home/jenkins/minikube-integration/17545-5130/.minikube/certs/11887_empty.pem, impossibly tiny 0 bytes
	I1103 20:47:43.142734   98430 certs.go:437] found cert: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca-key.pem (1675 bytes)
	I1103 20:47:43.142757   98430 certs.go:437] found cert: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem (1082 bytes)
	I1103 20:47:43.142779   98430 certs.go:437] found cert: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/home/jenkins/minikube-integration/17545-5130/.minikube/certs/cert.pem (1123 bytes)
	I1103 20:47:43.142805   98430 certs.go:437] found cert: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/home/jenkins/minikube-integration/17545-5130/.minikube/certs/key.pem (1679 bytes)
	I1103 20:47:43.142843   98430 certs.go:437] found cert: /home/jenkins/minikube-integration/17545-5130/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17545-5130/.minikube/files/etc/ssl/certs/118872.pem (1708 bytes)
	I1103 20:47:43.142872   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/11887.pem -> /usr/share/ca-certificates/11887.pem
	I1103 20:47:43.142891   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/files/etc/ssl/certs/118872.pem -> /usr/share/ca-certificates/118872.pem
	I1103 20:47:43.142903   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1103 20:47:43.143429   98430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1103 20:47:43.165079   98430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1103 20:47:43.184665   98430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1103 20:47:43.204311   98430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1103 20:47:43.224543   98430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1103 20:47:43.244609   98430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1103 20:47:43.264857   98430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1103 20:47:43.284505   98430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1103 20:47:43.304270   98430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/certs/11887.pem --> /usr/share/ca-certificates/11887.pem (1338 bytes)
	I1103 20:47:43.324582   98430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/files/etc/ssl/certs/118872.pem --> /usr/share/ca-certificates/118872.pem (1708 bytes)
	I1103 20:47:43.343851   98430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1103 20:47:43.363006   98430 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1103 20:47:43.377623   98430 ssh_runner.go:195] Run: openssl version
	I1103 20:47:43.382455   98430 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1103 20:47:43.382507   98430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1103 20:47:43.390907   98430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1103 20:47:43.393963   98430 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  3 20:29 /usr/share/ca-certificates/minikubeCA.pem
	I1103 20:47:43.393995   98430 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  3 20:29 /usr/share/ca-certificates/minikubeCA.pem
	I1103 20:47:43.394040   98430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1103 20:47:43.399860   98430 command_runner.go:130] > b5213941
	I1103 20:47:43.400044   98430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1103 20:47:43.407931   98430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11887.pem && ln -fs /usr/share/ca-certificates/11887.pem /etc/ssl/certs/11887.pem"
	I1103 20:47:43.415956   98430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11887.pem
	I1103 20:47:43.418744   98430 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  3 20:35 /usr/share/ca-certificates/11887.pem
	I1103 20:47:43.418772   98430 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  3 20:35 /usr/share/ca-certificates/11887.pem
	I1103 20:47:43.418818   98430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11887.pem
	I1103 20:47:43.424687   98430 command_runner.go:130] > 51391683
	I1103 20:47:43.424859   98430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11887.pem /etc/ssl/certs/51391683.0"
	I1103 20:47:43.432508   98430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/118872.pem && ln -fs /usr/share/ca-certificates/118872.pem /etc/ssl/certs/118872.pem"
	I1103 20:47:43.440561   98430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/118872.pem
	I1103 20:47:43.443593   98430 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  3 20:35 /usr/share/ca-certificates/118872.pem
	I1103 20:47:43.443621   98430 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  3 20:35 /usr/share/ca-certificates/118872.pem
	I1103 20:47:43.443650   98430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/118872.pem
	I1103 20:47:43.449418   98430 command_runner.go:130] > 3ec20f2e
	I1103 20:47:43.449482   98430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/118872.pem /etc/ssl/certs/3ec20f2e.0"
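The three certificate installs above all repeat one OpenSSL convention: copy the PEM into /usr/share/ca-certificates, compute its subject hash, and expose it as a <hash>.0 symlink. Condensed into a sketch (file name taken from the log; b5213941 is the hash printed above):

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")     # prints b5213941 for this CA
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"    # OpenSSL resolves CAs by <hash>.0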
	I1103 20:47:43.457222   98430 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1103 20:47:43.459966   98430 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1103 20:47:43.460004   98430 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1103 20:47:43.460036   98430 kubeadm.go:404] StartCluster: {Name:multinode-280480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-280480 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1103 20:47:43.460098   98430 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1103 20:47:43.460140   98430 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1103 20:47:43.493459   98430 cri.go:89] found id: ""
	I1103 20:47:43.493543   98430 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1103 20:47:43.500761   98430 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1103 20:47:43.500790   98430 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1103 20:47:43.500802   98430 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1103 20:47:43.501471   98430 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1103 20:47:43.509090   98430 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1103 20:47:43.509138   98430 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1103 20:47:43.516592   98430 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1103 20:47:43.516621   98430 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1103 20:47:43.516628   98430 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1103 20:47:43.516635   98430 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1103 20:47:43.516662   98430 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1103 20:47:43.516687   98430 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1103 20:47:43.558473   98430 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1103 20:47:43.558507   98430 command_runner.go:130] > [init] Using Kubernetes version: v1.28.3
	I1103 20:47:43.558723   98430 kubeadm.go:322] [preflight] Running pre-flight checks
	I1103 20:47:43.558738   98430 command_runner.go:130] > [preflight] Running pre-flight checks
	I1103 20:47:43.593580   98430 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1103 20:47:43.593620   98430 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1103 20:47:43.593715   98430 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1046-gcp
	I1103 20:47:43.593727   98430 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1046-gcp
	I1103 20:47:43.593776   98430 kubeadm.go:322] OS: Linux
	I1103 20:47:43.593787   98430 command_runner.go:130] > OS: Linux
	I1103 20:47:43.593849   98430 kubeadm.go:322] CGROUPS_CPU: enabled
	I1103 20:47:43.593859   98430 command_runner.go:130] > CGROUPS_CPU: enabled
	I1103 20:47:43.593924   98430 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1103 20:47:43.593934   98430 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1103 20:47:43.593991   98430 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1103 20:47:43.594007   98430 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1103 20:47:43.594062   98430 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1103 20:47:43.594070   98430 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1103 20:47:43.594107   98430 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1103 20:47:43.594114   98430 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1103 20:47:43.594151   98430 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1103 20:47:43.594158   98430 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1103 20:47:43.594214   98430 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1103 20:47:43.594227   98430 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1103 20:47:43.594298   98430 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1103 20:47:43.594308   98430 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1103 20:47:43.594367   98430 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1103 20:47:43.594379   98430 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1103 20:47:43.656121   98430 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1103 20:47:43.656151   98430 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1103 20:47:43.656295   98430 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1103 20:47:43.656308   98430 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1103 20:47:43.656414   98430 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1103 20:47:43.656454   98430 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1103 20:47:43.842542   98430 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1103 20:47:43.846117   98430 out.go:204]   - Generating certificates and keys ...
	I1103 20:47:43.842615   98430 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1103 20:47:43.846263   98430 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1103 20:47:43.846290   98430 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1103 20:47:43.846408   98430 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1103 20:47:43.846420   98430 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1103 20:47:44.174061   98430 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1103 20:47:44.174090   98430 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1103 20:47:44.355481   98430 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1103 20:47:44.355512   98430 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1103 20:47:44.528890   98430 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1103 20:47:44.528924   98430 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1103 20:47:44.627537   98430 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1103 20:47:44.627563   98430 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1103 20:47:44.943484   98430 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1103 20:47:44.943517   98430 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1103 20:47:44.943650   98430 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-280480] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1103 20:47:44.943659   98430 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-280480] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1103 20:47:44.992093   98430 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1103 20:47:44.992125   98430 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1103 20:47:44.992255   98430 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-280480] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1103 20:47:44.992266   98430 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-280480] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1103 20:47:45.087181   98430 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1103 20:47:45.087209   98430 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1103 20:47:45.282458   98430 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1103 20:47:45.282487   98430 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1103 20:47:45.506915   98430 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1103 20:47:45.506942   98430 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1103 20:47:45.507071   98430 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1103 20:47:45.507087   98430 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1103 20:47:45.739487   98430 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1103 20:47:45.739534   98430 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1103 20:47:45.796079   98430 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1103 20:47:45.796102   98430 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1103 20:47:45.960661   98430 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1103 20:47:45.960698   98430 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1103 20:47:46.102565   98430 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1103 20:47:46.102594   98430 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1103 20:47:46.103044   98430 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1103 20:47:46.103068   98430 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1103 20:47:46.106024   98430 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1103 20:47:46.108352   98430 out.go:204]   - Booting up control plane ...
	I1103 20:47:46.106110   98430 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1103 20:47:46.108466   98430 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1103 20:47:46.108491   98430 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1103 20:47:46.108602   98430 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1103 20:47:46.108611   98430 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1103 20:47:46.108677   98430 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1103 20:47:46.108686   98430 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1103 20:47:46.116156   98430 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1103 20:47:46.116165   98430 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1103 20:47:46.116922   98430 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1103 20:47:46.116929   98430 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1103 20:47:46.116967   98430 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1103 20:47:46.116978   98430 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1103 20:47:46.187440   98430 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1103 20:47:46.187448   98430 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1103 20:47:50.689266   98430 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.501902 seconds
	I1103 20:47:50.689291   98430 command_runner.go:130] > [apiclient] All control plane components are healthy after 4.501902 seconds
	I1103 20:47:50.689460   98430 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1103 20:47:50.689475   98430 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1103 20:47:50.701225   98430 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1103 20:47:50.701249   98430 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1103 20:47:51.220828   98430 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1103 20:47:51.220856   98430 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1103 20:47:51.221082   98430 kubeadm.go:322] [mark-control-plane] Marking the node multinode-280480 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1103 20:47:51.221097   98430 command_runner.go:130] > [mark-control-plane] Marking the node multinode-280480 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1103 20:47:51.729558   98430 kubeadm.go:322] [bootstrap-token] Using token: 4kfqip.d6301sr0rq8jacvf
	I1103 20:47:51.731120   98430 out.go:204]   - Configuring RBAC rules ...
	I1103 20:47:51.729631   98430 command_runner.go:130] > [bootstrap-token] Using token: 4kfqip.d6301sr0rq8jacvf
	I1103 20:47:51.731249   98430 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1103 20:47:51.731264   98430 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1103 20:47:51.734376   98430 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1103 20:47:51.734402   98430 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1103 20:47:51.742032   98430 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1103 20:47:51.742059   98430 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1103 20:47:51.744708   98430 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1103 20:47:51.744726   98430 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1103 20:47:51.747112   98430 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1103 20:47:51.747129   98430 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1103 20:47:51.750449   98430 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1103 20:47:51.750490   98430 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1103 20:47:51.759218   98430 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1103 20:47:51.759244   98430 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1103 20:47:51.968325   98430 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1103 20:47:51.968355   98430 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1103 20:47:52.137552   98430 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1103 20:47:52.137598   98430 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1103 20:47:52.138593   98430 kubeadm.go:322] 
	I1103 20:47:52.138704   98430 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1103 20:47:52.138720   98430 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1103 20:47:52.138731   98430 kubeadm.go:322] 
	I1103 20:47:52.138827   98430 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1103 20:47:52.138848   98430 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1103 20:47:52.138854   98430 kubeadm.go:322] 
	I1103 20:47:52.138888   98430 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1103 20:47:52.138898   98430 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1103 20:47:52.138971   98430 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1103 20:47:52.138981   98430 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1103 20:47:52.139047   98430 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1103 20:47:52.139059   98430 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1103 20:47:52.139065   98430 kubeadm.go:322] 
	I1103 20:47:52.139136   98430 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1103 20:47:52.139146   98430 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1103 20:47:52.139153   98430 kubeadm.go:322] 
	I1103 20:47:52.139218   98430 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1103 20:47:52.139227   98430 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1103 20:47:52.139233   98430 kubeadm.go:322] 
	I1103 20:47:52.139301   98430 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1103 20:47:52.139311   98430 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1103 20:47:52.139405   98430 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1103 20:47:52.139420   98430 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1103 20:47:52.139505   98430 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1103 20:47:52.139515   98430 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1103 20:47:52.139521   98430 kubeadm.go:322] 
	I1103 20:47:52.139614   98430 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1103 20:47:52.139624   98430 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1103 20:47:52.139714   98430 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1103 20:47:52.139724   98430 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1103 20:47:52.139730   98430 kubeadm.go:322] 
	I1103 20:47:52.139823   98430 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 4kfqip.d6301sr0rq8jacvf \
	I1103 20:47:52.139833   98430 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 4kfqip.d6301sr0rq8jacvf \
	I1103 20:47:52.139943   98430 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1257a42a1bc28f8e43e186124137176ba467e34a8eab3dd89eabd155069822df \
	I1103 20:47:52.139953   98430 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:1257a42a1bc28f8e43e186124137176ba467e34a8eab3dd89eabd155069822df \
	I1103 20:47:52.139980   98430 kubeadm.go:322] 	--control-plane 
	I1103 20:47:52.139990   98430 command_runner.go:130] > 	--control-plane 
	I1103 20:47:52.139995   98430 kubeadm.go:322] 
	I1103 20:47:52.140105   98430 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1103 20:47:52.140116   98430 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1103 20:47:52.140121   98430 kubeadm.go:322] 
	I1103 20:47:52.140224   98430 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 4kfqip.d6301sr0rq8jacvf \
	I1103 20:47:52.140234   98430 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 4kfqip.d6301sr0rq8jacvf \
	I1103 20:47:52.140352   98430 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1257a42a1bc28f8e43e186124137176ba467e34a8eab3dd89eabd155069822df 
	I1103 20:47:52.140363   98430 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:1257a42a1bc28f8e43e186124137176ba467e34a8eab3dd89eabd155069822df 
	I1103 20:47:52.142581   98430 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1046-gcp\n", err: exit status 1
	I1103 20:47:52.142604   98430 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1046-gcp\n", err: exit status 1
	I1103 20:47:52.142742   98430 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1103 20:47:52.142768   98430 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
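For reference, the sha256 value passed as --discovery-token-ca-cert-hash in the join commands above is a hash of the cluster CA's public key. It can be recomputed on the control-plane node with the standard kubeadm recipe (a sketch; assumes the default PKI path /etc/kubernetes/pki/ca.crt):

    # Recompute the discovery token CA cert hash from the cluster CA certificate
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'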
	I1103 20:47:52.142804   98430 cni.go:84] Creating CNI manager for ""
	I1103 20:47:52.142816   98430 cni.go:136] 1 nodes found, recommending kindnet
	I1103 20:47:52.144686   98430 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1103 20:47:52.145982   98430 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1103 20:47:52.189689   98430 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1103 20:47:52.189712   98430 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I1103 20:47:52.189722   98430 command_runner.go:130] > Device: 33h/51d	Inode: 544546      Links: 1
	I1103 20:47:52.189732   98430 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1103 20:47:52.189744   98430 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I1103 20:47:52.189750   98430 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I1103 20:47:52.189755   98430 command_runner.go:130] > Change: 2023-11-03 20:29:19.703825044 +0000
	I1103 20:47:52.189764   98430 command_runner.go:130] >  Birth: 2023-11-03 20:29:19.679822742 +0000
	I1103 20:47:52.189807   98430 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1103 20:47:52.189821   98430 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1103 20:47:52.207353   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1103 20:47:52.824148   98430 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1103 20:47:52.828342   98430 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1103 20:47:52.834154   98430 command_runner.go:130] > serviceaccount/kindnet created
	I1103 20:47:52.842769   98430 command_runner.go:130] > daemonset.apps/kindnet created
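With the kindnet RBAC objects and daemonset applied above, a manual check of the CNI rollout would be (a sketch, not something the test harness runs itself; assumes kubectl is pointed at this cluster):

    # Wait for the kindnet CNI daemonset applied above to finish rolling out
    kubectl -n kube-system rollout status daemonset/kindnet --timeout=2m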
	I1103 20:47:52.846856   98430 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1103 20:47:52.846968   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:47:52.846997   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=44765b58c8440feed3c9edc110a2d06dc722956e minikube.k8s.io/name=multinode-280480 minikube.k8s.io/updated_at=2023_11_03T20_47_52_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:47:52.853565   98430 command_runner.go:130] > -16
	I1103 20:47:52.853712   98430 ops.go:34] apiserver oom_adj: -16
	I1103 20:47:52.929786   98430 command_runner.go:130] > node/multinode-280480 labeled
	I1103 20:47:52.932779   98430 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1103 20:47:52.932895   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:47:53.002947   98430 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1103 20:47:53.003048   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:47:53.066374   98430 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1103 20:47:53.567191   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:47:53.631463   98430 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1103 20:47:54.067040   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:47:54.127255   98430 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1103 20:47:54.566924   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:47:54.630420   98430 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1103 20:47:55.066995   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:47:55.129444   98430 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1103 20:47:55.566609   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:47:55.626336   98430 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1103 20:47:56.067303   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:47:56.126152   98430 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1103 20:47:56.567104   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:47:56.630433   98430 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1103 20:47:57.066948   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:47:57.131161   98430 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1103 20:47:57.567532   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:47:57.626831   98430 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1103 20:47:58.067155   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:47:58.128553   98430 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1103 20:47:58.566736   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:47:58.628999   98430 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1103 20:47:59.066549   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:47:59.130148   98430 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1103 20:47:59.566705   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:47:59.628004   98430 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1103 20:48:00.067315   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:48:00.126796   98430 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1103 20:48:00.566610   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:48:00.630331   98430 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1103 20:48:01.066993   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:48:01.129687   98430 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1103 20:48:01.567368   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:48:01.631562   98430 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1103 20:48:02.067190   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:48:02.129249   98430 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1103 20:48:02.566718   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:48:02.627127   98430 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1103 20:48:03.067387   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:48:03.126377   98430 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1103 20:48:03.566571   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:48:03.629130   98430 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1103 20:48:04.066689   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:48:04.130625   98430 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1103 20:48:04.567219   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:48:04.634875   98430 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1103 20:48:05.067465   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1103 20:48:05.191997   98430 command_runner.go:130] > NAME      SECRETS   AGE
	I1103 20:48:05.192029   98430 command_runner.go:130] > default   0         1s
	I1103 20:48:05.192060   98430 kubeadm.go:1081] duration metric: took 12.34515227s to wait for elevateKubeSystemPrivileges.
	I1103 20:48:05.192088   98430 kubeadm.go:406] StartCluster complete in 21.732053671s
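The repeated 'serviceaccounts "default" not found' errors above are an expected polling loop: minikube re-runs kubectl get sa default until the controller-manager populates the namespace's default ServiceAccount, which is what the 12.3s elevateKubeSystemPrivileges wait measures. A minimal shell equivalent of that wait (a sketch; assumes kubectl access):

    # Poll until the "default" ServiceAccount exists, mirroring the retry loop above
    until kubectl get sa default >/dev/null 2>&1; do
      sleep 0.5
    done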
	I1103 20:48:05.192111   98430 settings.go:142] acquiring lock: {Name:mk78e85fd384b188b08ef0a94e618db15bb45e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:48:05.192199   98430 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17545-5130/kubeconfig
	I1103 20:48:05.193212   98430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/kubeconfig: {Name:mk13adb0876366d94fd82a065912fb44eee0cd10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:48:05.193655   98430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1103 20:48:05.193764   98430 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1103 20:48:05.194077   98430 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17545-5130/kubeconfig
	I1103 20:48:05.194092   98430 addons.go:69] Setting storage-provisioner=true in profile "multinode-280480"
	I1103 20:48:05.194113   98430 addons.go:231] Setting addon storage-provisioner=true in "multinode-280480"
	I1103 20:48:05.193930   98430 config.go:182] Loaded profile config "multinode-280480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1103 20:48:05.194172   98430 host.go:66] Checking if "multinode-280480" exists ...
	I1103 20:48:05.194212   98430 addons.go:69] Setting default-storageclass=true in profile "multinode-280480"
	I1103 20:48:05.194234   98430 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-280480"
	I1103 20:48:05.194514   98430 cli_runner.go:164] Run: docker container inspect multinode-280480 --format={{.State.Status}}
	I1103 20:48:05.194523   98430 kapi.go:59] client config for multinode-280480: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/client.crt", KeyFile:"/home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/client.key", CAFile:"/home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bb20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1103 20:48:05.194675   98430 cli_runner.go:164] Run: docker container inspect multinode-280480 --format={{.State.Status}}
	I1103 20:48:05.195626   98430 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1103 20:48:05.195648   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:05.195659   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:05.195670   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:05.195770   98430 cert_rotation.go:137] Starting client certificate rotation controller
	I1103 20:48:05.211492   98430 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1103 20:48:05.211511   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:05.211517   98430 round_trippers.go:580]     Audit-Id: 69e15a21-9e04-4f5f-877f-e80e718c980d
	I1103 20:48:05.211522   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:05.211527   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:05.211532   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:05.211537   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:05.211543   98430 round_trippers.go:580]     Content-Length: 291
	I1103 20:48:05.211548   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:05 GMT
	I1103 20:48:05.211568   98430 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a22d9e76-d717-469e-a0fe-24082478dbf0","resourceVersion":"335","creationTimestamp":"2023-11-03T20:47:51Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1103 20:48:05.211891   98430 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a22d9e76-d717-469e-a0fe-24082478dbf0","resourceVersion":"335","creationTimestamp":"2023-11-03T20:47:51Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1103 20:48:05.211929   98430 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1103 20:48:05.211935   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:05.211943   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:05.211948   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:05.211954   98430 round_trippers.go:473]     Content-Type: application/json
	I1103 20:48:05.218616   98430 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1103 20:48:05.218640   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:05.218653   98430 round_trippers.go:580]     Audit-Id: b3dea0bc-d647-484e-a6e5-51ac750a0ae5
	I1103 20:48:05.218662   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:05.218670   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:05.218676   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:05.218681   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:05.218690   98430 round_trippers.go:580]     Content-Length: 291
	I1103 20:48:05.218698   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:05 GMT
	I1103 20:48:05.218736   98430 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a22d9e76-d717-469e-a0fe-24082478dbf0","resourceVersion":"341","creationTimestamp":"2023-11-03T20:47:51Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1103 20:48:05.218992   98430 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17545-5130/kubeconfig
	I1103 20:48:05.218994   98430 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1103 20:48:05.219047   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:05.219064   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:05.219083   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:05.219300   98430 kapi.go:59] client config for multinode-280480: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/client.crt", KeyFile:"/home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/client.key", CAFile:"/home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bb20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1103 20:48:05.219660   98430 addons.go:231] Setting addon default-storageclass=true in "multinode-280480"
	I1103 20:48:05.219698   98430 host.go:66] Checking if "multinode-280480" exists ...
	I1103 20:48:05.220225   98430 cli_runner.go:164] Run: docker container inspect multinode-280480 --format={{.State.Status}}
	I1103 20:48:05.223106   98430 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1103 20:48:05.223123   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:05.223133   98430 round_trippers.go:580]     Content-Length: 291
	I1103 20:48:05.223150   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:05 GMT
	I1103 20:48:05.223158   98430 round_trippers.go:580]     Audit-Id: 65421dfc-d566-473b-896b-ab6055d120ff
	I1103 20:48:05.223168   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:05.223175   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:05.223182   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:05.223189   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:05.223216   98430 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a22d9e76-d717-469e-a0fe-24082478dbf0","resourceVersion":"341","creationTimestamp":"2023-11-03T20:47:51Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1103 20:48:05.223307   98430 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-280480" context rescaled to 1 replicas
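The GET/PUT pair against the coredns Scale subresource above drops spec.replicas from 2 to 1 so a single-node cluster runs one CoreDNS pod. The equivalent manual step would be (a sketch; assumes kubectl access):

    # Same rescale as the Scale PUT above
    kubectl -n kube-system scale deployment coredns --replicas=1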
	I1103 20:48:05.223348   98430 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1103 20:48:05.225080   98430 out.go:177] * Verifying Kubernetes components...
	I1103 20:48:05.226920   98430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1103 20:48:05.228815   98430 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1103 20:48:05.230148   98430 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1103 20:48:05.230169   98430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1103 20:48:05.230222   98430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-280480
	I1103 20:48:05.240878   98430 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1103 20:48:05.240903   98430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1103 20:48:05.240956   98430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-280480
	I1103 20:48:05.248091   98430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/multinode-280480/id_rsa Username:docker}
	I1103 20:48:05.257506   98430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/multinode-280480/id_rsa Username:docker}
	I1103 20:48:05.412453   98430 command_runner.go:130] > apiVersion: v1
	I1103 20:48:05.412479   98430 command_runner.go:130] > data:
	I1103 20:48:05.412486   98430 command_runner.go:130] >   Corefile: |
	I1103 20:48:05.412493   98430 command_runner.go:130] >     .:53 {
	I1103 20:48:05.412500   98430 command_runner.go:130] >         errors
	I1103 20:48:05.412509   98430 command_runner.go:130] >         health {
	I1103 20:48:05.412517   98430 command_runner.go:130] >            lameduck 5s
	I1103 20:48:05.412523   98430 command_runner.go:130] >         }
	I1103 20:48:05.412529   98430 command_runner.go:130] >         ready
	I1103 20:48:05.412538   98430 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1103 20:48:05.412546   98430 command_runner.go:130] >            pods insecure
	I1103 20:48:05.412560   98430 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1103 20:48:05.412568   98430 command_runner.go:130] >            ttl 30
	I1103 20:48:05.412576   98430 command_runner.go:130] >         }
	I1103 20:48:05.412580   98430 command_runner.go:130] >         prometheus :9153
	I1103 20:48:05.412587   98430 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1103 20:48:05.412592   98430 command_runner.go:130] >            max_concurrent 1000
	I1103 20:48:05.412599   98430 command_runner.go:130] >         }
	I1103 20:48:05.412603   98430 command_runner.go:130] >         cache 30
	I1103 20:48:05.412609   98430 command_runner.go:130] >         loop
	I1103 20:48:05.412619   98430 command_runner.go:130] >         reload
	I1103 20:48:05.412627   98430 command_runner.go:130] >         loadbalance
	I1103 20:48:05.412636   98430 command_runner.go:130] >     }
	I1103 20:48:05.412643   98430 command_runner.go:130] > kind: ConfigMap
	I1103 20:48:05.412651   98430 command_runner.go:130] > metadata:
	I1103 20:48:05.412668   98430 command_runner.go:130] >   creationTimestamp: "2023-11-03T20:47:51Z"
	I1103 20:48:05.412678   98430 command_runner.go:130] >   name: coredns
	I1103 20:48:05.412685   98430 command_runner.go:130] >   namespace: kube-system
	I1103 20:48:05.412696   98430 command_runner.go:130] >   resourceVersion: "228"
	I1103 20:48:05.412703   98430 command_runner.go:130] >   uid: 83c71cdd-c799-4946-9974-04eb0f9919bf
	I1103 20:48:05.416066   98430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
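The pipeline above pulls the coredns ConfigMap, uses sed to insert a hosts block ahead of the forward plugin (and a log directive after errors), then replaces the ConfigMap. Applied to the Corefile dumped above, the injected block resolves host.minikube.internal to the host gateway:

        hosts {
           192.168.58.1 host.minikube.internal
           fallthrough
        }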
	I1103 20:48:05.416364   98430 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17545-5130/kubeconfig
	I1103 20:48:05.416692   98430 kapi.go:59] client config for multinode-280480: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/client.crt", KeyFile:"/home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/client.key", CAFile:"/home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bb20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1103 20:48:05.417026   98430 node_ready.go:35] waiting up to 6m0s for node "multinode-280480" to be "Ready" ...
	I1103 20:48:05.417132   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:05.417146   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:05.417157   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:05.417167   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:05.419669   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:05.419719   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:05.419744   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:05.419757   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:05 GMT
	I1103 20:48:05.419770   98430 round_trippers.go:580]     Audit-Id: c7b85897-5532-4924-a531-6614b4f934ab
	I1103 20:48:05.419781   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:05.419790   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:05.419799   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:05.419915   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:05.420696   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:05.420722   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:05.420732   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:05.420755   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:05.425797   98430 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1103 20:48:05.425824   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:05.425835   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:05.425844   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:05.425850   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:05.425858   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:05.425866   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:05 GMT
	I1103 20:48:05.425877   98430 round_trippers.go:580]     Audit-Id: 6dd8f622-9e20-4309-8ba3-d326c4fa1fbc
	I1103 20:48:05.426007   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:05.508719   98430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1103 20:48:05.610513   98430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1103 20:48:05.927220   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:05.927238   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:05.927246   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:05.927252   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:05.989858   98430 command_runner.go:130] > configmap/coredns replaced
	I1103 20:48:05.992220   98430 round_trippers.go:574] Response Status: 200 OK in 64 milliseconds
	I1103 20:48:05.992247   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:05.992258   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:05.992268   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:05 GMT
	I1103 20:48:05.992276   98430 round_trippers.go:580]     Audit-Id: 1712b25e-800f-4c07-b82c-921290b01d8e
	I1103 20:48:05.992283   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:05.992290   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:05.992306   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:05.992459   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:05.994213   98430 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1103 20:48:05.994263   98430 start.go:926] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I1103 20:48:05.994363   98430 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1103 20:48:05.994381   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:05.994392   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:05.994405   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:05.996396   98430 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1103 20:48:05.996441   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:05.996453   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:05.996462   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:05.996470   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:05.996482   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:05.996495   98430 round_trippers.go:580]     Content-Length: 1273
	I1103 20:48:05.996506   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:05 GMT
	I1103 20:48:05.996517   98430 round_trippers.go:580]     Audit-Id: 7c73f0db-ea71-488d-b19c-1d76b0cfce39
	I1103 20:48:05.996586   98430 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"363"},"items":[{"metadata":{"name":"standard","uid":"544d038b-b9cb-44d2-aa81-f1b2b09e5328","resourceVersion":"362","creationTimestamp":"2023-11-03T20:48:05Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-03T20:48:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1103 20:48:05.997046   98430 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"544d038b-b9cb-44d2-aa81-f1b2b09e5328","resourceVersion":"362","creationTimestamp":"2023-11-03T20:48:05Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-03T20:48:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1103 20:48:05.997101   98430 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1103 20:48:05.997112   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:05.997124   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:05.997136   98430 round_trippers.go:473]     Content-Type: application/json
	I1103 20:48:05.997148   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:05.999575   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:05.999597   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:05.999607   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:05.999616   98430 round_trippers.go:580]     Content-Length: 1220
	I1103 20:48:05.999624   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:05 GMT
	I1103 20:48:05.999636   98430 round_trippers.go:580]     Audit-Id: b6f80689-7364-41a6-b3db-22a5b856d5e1
	I1103 20:48:05.999644   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:05.999660   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:05.999668   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:05.999713   98430 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"544d038b-b9cb-44d2-aa81-f1b2b09e5328","resourceVersion":"362","creationTimestamp":"2023-11-03T20:48:05Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-03T20:48:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1103 20:48:06.165216   98430 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1103 20:48:06.169701   98430 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1103 20:48:06.176724   98430 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1103 20:48:06.182445   98430 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1103 20:48:06.191657   98430 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1103 20:48:06.202006   98430 command_runner.go:130] > pod/storage-provisioner created
	I1103 20:48:06.209122   98430 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1103 20:48:06.210399   98430 addons.go:502] enable addons completed in 1.016635416s: enabled=[default-storageclass storage-provisioner]
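The storageclass addon marks "standard" as the default class (the storageclass.kubernetes.io/is-default-class annotation in the API bodies above). A quick manual check (a sketch; assumes kubectl access):

    # The standard class should be listed as (default), provisioned by k8s.io/minikube-hostpath
    kubectl get storageclass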
	I1103 20:48:06.426793   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:06.426815   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:06.426823   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:06.426830   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:06.429176   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:06.429203   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:06.429214   98430 round_trippers.go:580]     Audit-Id: e16d1204-d7d6-4792-898e-29aacf8140d6
	I1103 20:48:06.429220   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:06.429226   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:06.429233   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:06.429242   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:06.429250   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:06 GMT
	I1103 20:48:06.429438   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:06.926970   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:06.926995   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:06.927006   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:06.927014   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:06.930934   98430 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1103 20:48:06.930961   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:06.930972   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:06.930981   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:06.930990   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:06.930998   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:06 GMT
	I1103 20:48:06.931007   98430 round_trippers.go:580]     Audit-Id: 8b834644-5920-43fb-9fcc-87ef8688f0a9
	I1103 20:48:06.931017   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:06.931157   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:07.426619   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:07.426646   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:07.426656   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:07.426664   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:07.428881   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:07.428901   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:07.428907   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:07.428913   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:07.428918   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:07.428925   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:07.428933   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:07 GMT
	I1103 20:48:07.428940   98430 round_trippers.go:580]     Audit-Id: ce10a8aa-690e-4239-a867-d650a1e04a7b
	I1103 20:48:07.429102   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:07.429454   98430 node_ready.go:58] node "multinode-280480" has status "Ready":"False"
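The repeating GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480 cycles above are minikube's node-readiness wait: the client re-fetches the Node object roughly every 500 ms and records node "multinode-280480" has status "Ready":"False" until the kubelet reports Ready or the wait times out. A minimal sketch of such a poll loop with client-go follows; it is illustrative only, not minikube's actual node_ready.go, and pollNodeReady is a hypothetical name.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// pollNodeReady re-fetches the named Node every 500 ms until its Ready
// condition is True, or until the timeout expires.
func pollNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient API errors as "not ready yet"
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil // kubelet has not posted a Ready condition yet
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := pollNodeReady(cs, "multinode-280480", 6*time.Minute); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}

Returning (false, nil) instead of an error keeps the loop polling through transient API failures, so only the final verdict (ready, or timed out) surfaces to the caller.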
	I1103 20:48:07.926699   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:07.926725   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:07.926738   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:07.926746   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:07.928970   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:07.928988   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:07.928995   98430 round_trippers.go:580]     Audit-Id: 2b2f721e-2723-4465-a259-71070b7f0552
	I1103 20:48:07.929000   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:07.929006   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:07.929011   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:07.929018   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:07.929026   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:07 GMT
	I1103 20:48:07.929148   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:08.426963   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:08.426990   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:08.427001   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:08.427017   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:08.429192   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:08.429212   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:08.429222   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:08.429228   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:08.429241   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:08 GMT
	I1103 20:48:08.429251   98430 round_trippers.go:580]     Audit-Id: 7ab72af0-5309-4898-9a59-f7d4c248fb1e
	I1103 20:48:08.429262   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:08.429269   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:08.429426   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:08.927003   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:08.927027   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:08.927035   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:08.927042   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:08.929341   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:08.929365   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:08.929374   98430 round_trippers.go:580]     Audit-Id: 740e2d86-3763-4920-88b7-d2b1d2f66dbf
	I1103 20:48:08.929381   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:08.929390   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:08.929401   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:08.929413   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:08.929425   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:08 GMT
	I1103 20:48:08.929543   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:09.427012   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:09.427043   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:09.427054   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:09.427060   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:09.429557   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:09.429575   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:09.429582   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:09.429588   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:09.429598   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:09 GMT
	I1103 20:48:09.429606   98430 round_trippers.go:580]     Audit-Id: f92f6987-4c08-4958-ac75-056c489a7b86
	I1103 20:48:09.429617   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:09.429624   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:09.429777   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:09.430111   98430 node_ready.go:58] node "multinode-280480" has status "Ready":"False"
	I1103 20:48:09.927364   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:09.927384   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:09.927392   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:09.927398   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:09.929519   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:09.929537   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:09.929544   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:09.929549   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:09.929556   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:09 GMT
	I1103 20:48:09.929568   98430 round_trippers.go:580]     Audit-Id: 67d7ee08-f0e9-437d-915f-a2df5938bfdc
	I1103 20:48:09.929582   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:09.929593   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:09.929727   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:10.427413   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:10.427434   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:10.427442   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:10.427448   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:10.429640   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:10.429659   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:10.429668   98430 round_trippers.go:580]     Audit-Id: cdef2710-2303-44ee-9bf5-14120527c8ec
	I1103 20:48:10.429676   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:10.429683   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:10.429690   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:10.429698   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:10.429707   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:10 GMT
	I1103 20:48:10.429868   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:10.927584   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:10.927608   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:10.927633   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:10.927639   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:10.929868   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:10.929887   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:10.929894   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:10.929899   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:10 GMT
	I1103 20:48:10.929904   98430 round_trippers.go:580]     Audit-Id: 2ddf0743-4f66-4ec3-b59a-754b58125fed
	I1103 20:48:10.929909   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:10.929914   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:10.929919   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:10.930038   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:11.426605   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:11.426631   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:11.426639   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:11.426644   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:11.428941   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:11.428958   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:11.428967   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:11 GMT
	I1103 20:48:11.428972   98430 round_trippers.go:580]     Audit-Id: 8c21e7ac-4e9e-4d5a-825f-16c38f56a502
	I1103 20:48:11.428978   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:11.428985   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:11.428994   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:11.429004   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:11.429151   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:11.926772   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:11.926800   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:11.926808   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:11.926816   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:11.929104   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:11.929123   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:11.929129   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:11.929135   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:11.929142   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:11.929151   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:11 GMT
	I1103 20:48:11.929167   98430 round_trippers.go:580]     Audit-Id: 98b6dcfc-2149-467e-b5db-a9307967d84a
	I1103 20:48:11.929178   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:11.929345   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:11.929678   98430 node_ready.go:58] node "multinode-280480" has status "Ready":"False"
	I1103 20:48:12.426886   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:12.426910   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:12.426924   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:12.426933   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:12.429306   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:12.429332   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:12.429339   98430 round_trippers.go:580]     Audit-Id: 200f1d49-a60e-4023-b496-8c890391e029
	I1103 20:48:12.429347   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:12.429355   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:12.429365   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:12.429377   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:12.429420   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:12 GMT
	I1103 20:48:12.429709   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:12.927063   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:12.927086   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:12.927098   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:12.927104   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:12.929546   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:12.929570   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:12.929578   98430 round_trippers.go:580]     Audit-Id: debc4c0c-6916-4e65-b09a-3c193fdbb2cc
	I1103 20:48:12.929583   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:12.929589   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:12.929594   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:12.929600   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:12.929605   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:12 GMT
	I1103 20:48:12.929701   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:13.427569   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:13.427589   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:13.427597   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:13.427616   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:13.429776   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:13.429798   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:13.429807   98430 round_trippers.go:580]     Audit-Id: 7201570f-1fd6-4859-811d-7787742597da
	I1103 20:48:13.429815   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:13.429844   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:13.429857   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:13.429867   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:13.429879   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:13 GMT
	I1103 20:48:13.430032   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:13.927594   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:13.927618   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:13.927626   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:13.927633   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:13.929802   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:13.929839   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:13.929851   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:13.929871   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:13.929883   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:13.929892   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:13.929899   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:13 GMT
	I1103 20:48:13.929909   98430 round_trippers.go:580]     Audit-Id: a85ec798-d37a-4158-aef2-bd9d70550516
	I1103 20:48:13.930044   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:13.930437   98430 node_ready.go:58] node "multinode-280480" has status "Ready":"False"
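Each request/response block in this trace (verb and URL, the indented Request Headers, the Response Status with latency, then Response Headers) is emitted by client-go's logging round tripper in round_trippers.go. A stripped-down, hypothetical re-implementation of that wrapper pattern is sketched below to show where those lines come from; it is not the client-go code itself.

package main

import (
	"fmt"
	"net/http"
	"strings"
	"time"
)

// loggingRoundTripper wraps another http.RoundTripper and prints the
// request and response metadata in the same shape as the log above.
type loggingRoundTripper struct{ next http.RoundTripper }

func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	fmt.Printf("%s %s\nRequest Headers:\n", req.Method, req.URL)
	for k, v := range req.Header {
		fmt.Printf("    %s: %s\n", k, strings.Join(v, ", "))
	}
	start := time.Now()
	resp, err := l.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	fmt.Printf("Response Status: %s in %d milliseconds\nResponse Headers:\n",
		resp.Status, time.Since(start).Milliseconds())
	for k, v := range resp.Header {
		fmt.Printf("    %s: %s\n", k, strings.Join(v, ", "))
	}
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingRoundTripper{next: http.DefaultTransport}}
	resp, err := client.Get("https://192.168.58.2:8443/api/v1/nodes/multinode-280480")
	if err != nil {
		fmt.Println(err) // outside the CI host this fails (unreachable/TLS)
		return
	}
	resp.Body.Close()
}

Because the wrapper satisfies http.RoundTripper, it can be slotted into any http.Client transport chain without changing call sites.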
	I1103 20:48:14.427636   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:14.427658   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:14.427666   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:14.427672   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:14.429814   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:14.429836   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:14.429843   98430 round_trippers.go:580]     Audit-Id: 987d679a-de22-4bdc-bf75-6e52c42d8057
	I1103 20:48:14.429848   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:14.429853   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:14.429859   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:14.429866   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:14.429871   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:14 GMT
	I1103 20:48:14.430020   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:14.926598   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:14.926620   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:14.926628   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:14.926634   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:14.928678   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:14.928700   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:14.928709   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:14.928718   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:14 GMT
	I1103 20:48:14.928726   98430 round_trippers.go:580]     Audit-Id: 834a570c-5903-41e1-8feb-b1f2799957b6
	I1103 20:48:14.928737   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:14.928750   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:14.928759   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:14.928897   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:15.427242   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:15.427266   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:15.427277   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:15.427285   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:15.429459   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:15.429485   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:15.429495   98430 round_trippers.go:580]     Audit-Id: 584671a3-f7bf-4f27-a693-0446b1a1cb97
	I1103 20:48:15.429503   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:15.429511   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:15.429518   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:15.429529   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:15.429537   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:15 GMT
	I1103 20:48:15.429809   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:15.927367   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:15.927389   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:15.927398   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:15.927404   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:15.929586   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:15.929614   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:15.929623   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:15 GMT
	I1103 20:48:15.929632   98430 round_trippers.go:580]     Audit-Id: 8cdfcb2e-5fe1-43fd-87f3-323724954b09
	I1103 20:48:15.929641   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:15.929650   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:15.929660   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:15.929670   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:15.929825   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:16.427293   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:16.427316   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:16.427338   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:16.427346   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:16.429589   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:16.429611   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:16.429620   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:16.429628   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:16 GMT
	I1103 20:48:16.429637   98430 round_trippers.go:580]     Audit-Id: 0625b01b-aed2-4162-bb4a-89d068ee29dd
	I1103 20:48:16.429645   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:16.429654   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:16.429667   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:16.429843   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:16.430269   98430 node_ready.go:58] node "multinode-280480" has status "Ready":"False"
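The Response Body lines are the serialized v1 Node object, cut off by the logger after roughly a kilobyte (hence "[truncated 6148 chars]"). Outside client-go, that JSON decodes directly into the typed corev1.Node struct; a small sketch follows, using a cut-down body assembled from the fields visible above.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Abbreviated from the logged body; only a few fields are kept.
	body := []byte(`{"kind":"Node","apiVersion":"v1","metadata":{` +
		`"name":"multinode-280480",` +
		`"labels":{"minikube.k8s.io/version":"v1.32.0-beta.0",` +
		`"node-role.kubernetes.io/control-plane":""}}}`)

	var node corev1.Node
	if err := json.Unmarshal(body, &node); err != nil {
		panic(err)
	}
	// Prints: multinode-280480 v1.32.0-beta.0
	fmt.Println(node.Name, node.Labels["minikube.k8s.io/version"])
}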
	I1103 20:48:16.927016   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:16.927036   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:16.927043   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:16.927050   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:16.930954   98430 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1103 20:48:16.930979   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:16.930989   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:16.930995   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:16.931003   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:16 GMT
	I1103 20:48:16.931010   98430 round_trippers.go:580]     Audit-Id: 860cc557-7f05-4f82-a437-4fa725a655fd
	I1103 20:48:16.931019   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:16.931029   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:16.931147   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:17.426644   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:17.426668   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:17.426676   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:17.426682   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:17.428877   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:17.428898   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:17.428907   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:17.428914   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:17.428922   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:17.428933   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:17 GMT
	I1103 20:48:17.428946   98430 round_trippers.go:580]     Audit-Id: 1aa53795-7f63-4882-93c2-ea74e23dfa5c
	I1103 20:48:17.428956   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:17.429094   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:17.926636   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:17.926660   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:17.926669   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:17.926675   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:17.928877   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:17.928904   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:17.928914   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:17.928924   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:17.928933   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:17.928940   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:17 GMT
	I1103 20:48:17.928949   98430 round_trippers.go:580]     Audit-Id: d992db3b-b4ae-493f-9c3d-83d17719ae60
	I1103 20:48:17.928954   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:17.929097   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:18.426743   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:18.426768   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:18.426776   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:18.426782   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:18.429069   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:18.429088   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:18.429095   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:18.429100   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:18.429106   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:18.429112   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:18 GMT
	I1103 20:48:18.429119   98430 round_trippers.go:580]     Audit-Id: d2163814-64c7-4185-918c-a11a5e426518
	I1103 20:48:18.429127   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:18.429284   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:18.926866   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:18.926897   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:18.926905   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:18.926913   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:18.929069   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:18.929093   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:18.929102   98430 round_trippers.go:580]     Audit-Id: 23fc46ee-6467-4dea-ba65-9a9776421545
	I1103 20:48:18.929110   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:18.929121   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:18.929129   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:18.929136   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:18.929148   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:18 GMT
	I1103 20:48:18.929274   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:18.929611   98430 node_ready.go:58] node "multinode-280480" has status "Ready":"False"
	I1103 20:48:19.426850   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:19.426870   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:19.426878   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:19.426884   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:19.428931   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:19.428961   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:19.428970   98430 round_trippers.go:580]     Audit-Id: f8a62022-acc1-455c-bd79-c7f1eb034d01
	I1103 20:48:19.428978   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:19.428989   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:19.429005   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:19.429015   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:19.429021   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:19 GMT
	I1103 20:48:19.429158   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:19.926658   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:19.926680   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:19.926688   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:19.926695   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:19.928888   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:19.928905   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:19.928912   98430 round_trippers.go:580]     Audit-Id: decb7800-4cb8-4159-8489-a08dfea2b81a
	I1103 20:48:19.928919   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:19.928927   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:19.928946   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:19.928954   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:19.928960   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:19 GMT
	I1103 20:48:19.929067   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:20.426671   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:20.426694   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:20.426702   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:20.426711   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:20.428535   98430 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1103 20:48:20.428554   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:20.428561   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:20.428570   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:20 GMT
	I1103 20:48:20.428579   98430 round_trippers.go:580]     Audit-Id: 8b9c1ad5-8462-458b-a267-6aaea8cd578d
	I1103 20:48:20.428592   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:20.428616   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:20.428626   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:20.428750   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:20.927363   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:20.927385   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:20.927397   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:20.927406   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:20.929835   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:20.929853   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:20.929860   98430 round_trippers.go:580]     Audit-Id: 50be1d0f-fe93-4408-8f43-78df0e72e321
	I1103 20:48:20.929867   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:20.929872   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:20.929877   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:20.929882   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:20.929890   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:20 GMT
	I1103 20:48:20.930025   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:20.930343   98430 node_ready.go:58] node "multinode-280480" has status "Ready":"False"
	I1103 20:48:21.426621   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:21.426644   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:21.426652   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:21.426658   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:21.428747   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:21.428764   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:21.428771   98430 round_trippers.go:580]     Audit-Id: e9371eb8-959f-475c-a117-db8695b93d8e
	I1103 20:48:21.428777   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:21.428782   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:21.428787   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:21.428792   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:21.428800   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:21 GMT
	I1103 20:48:21.428968   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:21.926580   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:21.926606   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:21.926613   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:21.926619   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:21.928857   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:21.928879   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:21.928887   98430 round_trippers.go:580]     Audit-Id: 583b378c-4631-4dab-bc04-c53626ce0866
	I1103 20:48:21.928895   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:21.928903   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:21.928910   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:21.928919   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:21.928933   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:21 GMT
	I1103 20:48:21.929065   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:22.426607   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:22.426629   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:22.426640   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:22.426646   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:22.428597   98430 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1103 20:48:22.428619   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:22.428628   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:22 GMT
	I1103 20:48:22.428635   98430 round_trippers.go:580]     Audit-Id: 013eff86-ab88-40d7-a8c7-941ffc721a28
	I1103 20:48:22.428642   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:22.428649   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:22.428658   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:22.428669   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:22.428810   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:22.927420   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:22.927444   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:22.927452   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:22.927458   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:22.929660   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:22.929680   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:22.929689   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:22.929697   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:22.929709   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:22.929718   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:22.929729   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:22 GMT
	I1103 20:48:22.929739   98430 round_trippers.go:580]     Audit-Id: 65c175bc-eaac-4cf1-a9e2-5eebd03a3835
	I1103 20:48:22.929851   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:23.426647   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:23.426675   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:23.426685   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:23.426693   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:23.428714   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:23.428732   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:23.428739   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:23 GMT
	I1103 20:48:23.428745   98430 round_trippers.go:580]     Audit-Id: e97a2ea1-d714-44dd-b6ae-fa7985854aa4
	I1103 20:48:23.428750   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:23.428755   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:23.428760   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:23.428765   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:23.428927   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:23.429364   98430 node_ready.go:58] node "multinode-280480" has status "Ready":"False"
	I1103 20:48:23.927579   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:23.927599   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:23.927607   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:23.927613   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:23.929729   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:23.929750   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:23.929764   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:23.929769   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:23.929774   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:23 GMT
	I1103 20:48:23.929779   98430 round_trippers.go:580]     Audit-Id: 32ad89ec-235b-4d81-811b-a8d27b9446e8
	I1103 20:48:23.929787   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:23.929795   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:23.929934   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:24.427573   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:24.427596   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:24.427604   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:24.427610   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:24.429843   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:24.429865   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:24.429872   98430 round_trippers.go:580]     Audit-Id: b5c70e42-1415-42b8-bf73-6b0429e644b4
	I1103 20:48:24.429886   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:24.429892   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:24.429898   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:24.429903   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:24.429908   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:24 GMT
	I1103 20:48:24.430096   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:24.926704   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:24.926733   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:24.926741   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:24.926747   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:24.928925   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:24.928942   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:24.928949   98430 round_trippers.go:580]     Audit-Id: 2af1cd45-b084-4868-a1d8-08746aa7d3fa
	I1103 20:48:24.928954   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:24.928959   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:24.928964   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:24.928972   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:24.928980   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:24 GMT
	I1103 20:48:24.929176   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:25.426798   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:25.426823   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:25.426831   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:25.426837   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:25.429171   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:25.429190   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:25.429197   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:25.429202   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:25 GMT
	I1103 20:48:25.429208   98430 round_trippers.go:580]     Audit-Id: 1852c83b-adef-4035-b986-fcb481063dcf
	I1103 20:48:25.429213   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:25.429218   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:25.429222   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:25.429342   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:25.429666   98430 node_ready.go:58] node "multinode-280480" has status "Ready":"False"
	I1103 20:48:25.926857   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:25.926877   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:25.926885   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:25.926891   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:25.929013   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:25.929030   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:25.929037   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:25.929047   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:25 GMT
	I1103 20:48:25.929053   98430 round_trippers.go:580]     Audit-Id: ee1287d8-4618-4d32-8350-0540702eef0e
	I1103 20:48:25.929058   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:25.929063   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:25.929068   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:25.929233   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:26.426805   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:26.426827   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:26.426834   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:26.426841   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:26.430269   98430 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1103 20:48:26.430294   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:26.430304   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:26.430313   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:26.430322   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:26 GMT
	I1103 20:48:26.430330   98430 round_trippers.go:580]     Audit-Id: 4297b4a7-18d3-4fde-82aa-b075e1c9309b
	I1103 20:48:26.430338   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:26.430347   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:26.430556   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:26.926753   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:26.926777   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:26.926785   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:26.926791   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:26.931225   98430 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1103 20:48:26.931249   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:26.931255   98430 round_trippers.go:580]     Audit-Id: 1e2e40d1-83ee-4365-b101-a8abe56293b4
	I1103 20:48:26.931260   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:26.931265   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:26.931270   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:26.931276   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:26.931281   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:26 GMT
	I1103 20:48:26.931459   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:27.426991   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:27.427015   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:27.427022   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:27.427047   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:27.429220   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:27.429242   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:27.429249   98430 round_trippers.go:580]     Audit-Id: df182f47-95e3-49e5-acb1-9f653b075836
	I1103 20:48:27.429254   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:27.429259   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:27.429264   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:27.429269   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:27.429274   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:27 GMT
	I1103 20:48:27.429444   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:27.429966   98430 node_ready.go:58] node "multinode-280480" has status "Ready":"False"
	I1103 20:48:27.927011   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:27.927031   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:27.927039   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:27.927045   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:27.929243   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:27.929265   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:27.929277   98430 round_trippers.go:580]     Audit-Id: c5da7f57-01d2-4732-863e-6fe905d38c52
	I1103 20:48:27.929283   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:27.929288   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:27.929293   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:27.929299   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:27.929315   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:27 GMT
	I1103 20:48:27.929494   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:28.427138   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:28.427166   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:28.427176   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:28.427184   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:28.429378   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:28.429397   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:28.429404   98430 round_trippers.go:580]     Audit-Id: a6acd9ef-c20e-479f-a026-2eb303d585b7
	I1103 20:48:28.429409   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:28.429414   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:28.429419   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:28.429427   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:28.429432   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:28 GMT
	I1103 20:48:28.429673   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:28.927370   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:28.927398   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:28.927407   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:28.927418   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:28.929636   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:28.929655   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:28.929662   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:28.929668   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:28 GMT
	I1103 20:48:28.929673   98430 round_trippers.go:580]     Audit-Id: 5de50843-5b40-4519-a44d-7d00e9548455
	I1103 20:48:28.929678   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:28.929683   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:28.929688   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:28.929801   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:29.427431   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:29.427459   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:29.427471   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:29.427480   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:29.429721   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:29.429743   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:29.429750   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:29 GMT
	I1103 20:48:29.429755   98430 round_trippers.go:580]     Audit-Id: a26a87c6-5027-4a0b-96df-841253f72853
	I1103 20:48:29.429761   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:29.429766   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:29.429771   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:29.429777   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:29.429979   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:29.430418   98430 node_ready.go:58] node "multinode-280480" has status "Ready":"False"
	I1103 20:48:29.927487   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:29.927508   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:29.927516   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:29.927525   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:29.929791   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:29.929812   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:29.929823   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:29.929831   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:29 GMT
	I1103 20:48:29.929838   98430 round_trippers.go:580]     Audit-Id: 3ea787af-0d76-4980-a1d9-b5310e6196d5
	I1103 20:48:29.929852   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:29.929860   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:29.929869   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:29.930015   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:30.427615   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:30.427637   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:30.427645   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:30.427651   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:30.429859   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:30.429881   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:30.429892   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:30.429922   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:30.429934   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:30.429943   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:30 GMT
	I1103 20:48:30.429951   98430 round_trippers.go:580]     Audit-Id: 8e6fd0ca-d7b7-429c-8f5b-2bb83ed5ad81
	I1103 20:48:30.429956   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:30.430109   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:30.927633   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:30.927664   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:30.927672   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:30.927678   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:30.929975   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:30.930001   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:30.930011   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:30.930018   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:30.930024   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:30 GMT
	I1103 20:48:30.930029   98430 round_trippers.go:580]     Audit-Id: ef7f9f19-53bf-4ffa-85b3-38379bf4050a
	I1103 20:48:30.930035   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:30.930050   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:30.930189   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:31.426721   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:31.426745   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:31.426753   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:31.426760   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:31.428911   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:31.428936   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:31.428945   98430 round_trippers.go:580]     Audit-Id: d19aaa71-5807-4667-858e-f0b592e35595
	I1103 20:48:31.428953   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:31.428961   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:31.428969   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:31.428977   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:31.428989   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:31 GMT
	I1103 20:48:31.429173   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:31.926662   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:31.926692   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:31.926703   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:31.926712   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:31.928855   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:31.928876   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:31.928885   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:31.928894   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:31 GMT
	I1103 20:48:31.928900   98430 round_trippers.go:580]     Audit-Id: a2977982-af80-4678-875a-1be165d06be0
	I1103 20:48:31.928907   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:31.928914   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:31.928922   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:31.929077   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:31.929394   98430 node_ready.go:58] node "multinode-280480" has status "Ready":"False"
	I1103 20:48:32.426615   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:32.426644   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:32.426653   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:32.426659   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:32.428897   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:32.428915   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:32.428922   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:32 GMT
	I1103 20:48:32.428927   98430 round_trippers.go:580]     Audit-Id: 5c5c99c6-c82c-42e5-a017-e814b7743e9c
	I1103 20:48:32.428933   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:32.428938   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:32.428944   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:32.428952   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:32.429158   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:32.926689   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:32.926711   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:32.926719   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:32.926725   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:32.928868   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:32.928887   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:32.928893   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:32.928898   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:32.928903   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:32.928908   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:32.928913   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:32 GMT
	I1103 20:48:32.928918   98430 round_trippers.go:580]     Audit-Id: 6169c4f8-6d3d-44fc-9457-25c524936925
	I1103 20:48:32.929064   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:33.427065   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:33.427084   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:33.427092   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:33.427098   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:33.429263   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:33.429290   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:33.429301   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:33.429310   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:33.429320   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:33.429336   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:33 GMT
	I1103 20:48:33.429345   98430 round_trippers.go:580]     Audit-Id: 5900f051-1c6f-45ad-a87c-6218a27e455f
	I1103 20:48:33.429359   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:33.429492   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:33.926895   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:33.926916   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:33.926924   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:33.926929   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:33.928959   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:33.928982   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:33.928993   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:33.929002   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:33.929011   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:33 GMT
	I1103 20:48:33.929021   98430 round_trippers.go:580]     Audit-Id: fc2d3c27-cb98-4f4b-b6df-4491df245674
	I1103 20:48:33.929031   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:33.929041   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:33.929161   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:33.929471   98430 node_ready.go:58] node "multinode-280480" has status "Ready":"False"
	I1103 20:48:34.426672   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:34.426691   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:34.426699   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:34.426705   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:34.428799   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:34.428818   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:34.428827   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:34.428836   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:34.428844   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:34 GMT
	I1103 20:48:34.428852   98430 round_trippers.go:580]     Audit-Id: 3c2fde20-b8f2-4d65-a063-d945746cf664
	I1103 20:48:34.428861   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:34.428873   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:34.429009   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:34.927611   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:34.927640   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:34.927649   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:34.927655   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:34.930002   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:34.930023   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:34.930029   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:34.930035   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:34.930040   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:34.930045   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:34.930052   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:34 GMT
	I1103 20:48:34.930058   98430 round_trippers.go:580]     Audit-Id: 8ddad896-9361-4f8c-93fa-ca7471f2e035
	I1103 20:48:34.930186   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:35.426779   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:35.426821   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:35.426835   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:35.426845   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:35.428899   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:35.428918   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:35.428924   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:35.428929   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:35.428934   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:35.428939   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:35 GMT
	I1103 20:48:35.428945   98430 round_trippers.go:580]     Audit-Id: 6ec927ea-02ab-4447-860a-121775eae941
	I1103 20:48:35.428950   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:35.429080   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:35.926664   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:35.926686   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:35.926694   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:35.926700   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:35.928933   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:35.928963   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:35.928973   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:35.928981   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:35.928988   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:35.928997   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:35 GMT
	I1103 20:48:35.929006   98430 round_trippers.go:580]     Audit-Id: 249e01dc-8d27-4c98-bdc6-c34ed34a76f1
	I1103 20:48:35.929014   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:35.929103   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"317","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 6148 chars]
	I1103 20:48:36.426698   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:36.426720   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:36.426728   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:36.426734   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:36.428461   98430 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1103 20:48:36.428485   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:36.428495   98430 round_trippers.go:580]     Audit-Id: 9b2337a5-a825-4626-982c-64e29346de9c
	I1103 20:48:36.428505   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:36.428518   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:36.428530   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:36.428540   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:36.428545   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:36 GMT
	I1103 20:48:36.428675   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"387","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 5954 chars]
	I1103 20:48:36.428990   98430 node_ready.go:49] node "multinode-280480" has status "Ready":"True"
	I1103 20:48:36.429005   98430 node_ready.go:38] duration metric: took 31.011940052s waiting for node "multinode-280480" to be "Ready" ...
	I1103 20:48:36.429013   98430 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
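(The ~500ms GET cycles above are the node-readiness poll from node_ready.go: re-fetch the Node object, inspect its Ready condition, repeat until it flips to True. A minimal sketch of that pattern with client-go follows; the helper name and the 500ms interval are assumptions inferred from the log timestamps, not minikube's actual code.)

package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady re-fetches the Node object until its Ready condition
// flips to True, the same pattern the GET loop in the log follows.
// (Illustrative helper; minikube's real loop lives in node_ready.go.)
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return nil // node reported Ready:"True", as at 20:48:36 above
			}
		}
		time.Sleep(500 * time.Millisecond) // poll interval assumed from the log
	}
	return fmt.Errorf("node %q not Ready after %s", name, timeout)
}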
	I1103 20:48:36.429105   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1103 20:48:36.429118   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:36.429124   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:36.429130   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:36.431581   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:36.431597   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:36.431603   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:36.431609   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:36.431614   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:36 GMT
	I1103 20:48:36.431620   98430 round_trippers.go:580]     Audit-Id: 629d3c98-2243-4383-9342-0939f2165772
	I1103 20:48:36.431628   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:36.431640   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:36.432158   98430 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"401"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rxqxb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c6417a12-b154-42c3-ac95-a45396156b0e","resourceVersion":"392","creationTimestamp":"2023-11-03T20:48:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"bfd15fc0-b82f-4ac0-a436-3489d2d0b53c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:48:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bfd15fc0-b82f-4ac0-a436-3489d2d0b53c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55533 chars]
	I1103 20:48:36.436380   98430 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rxqxb" in "kube-system" namespace to be "Ready" ...
	I1103 20:48:36.436462   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rxqxb
	I1103 20:48:36.436475   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:36.436484   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:36.436492   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:36.438073   98430 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1103 20:48:36.438087   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:36.438094   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:36.438100   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:36.438108   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:36.438116   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:36.438124   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:36 GMT
	I1103 20:48:36.438132   98430 round_trippers.go:580]     Audit-Id: 2153a580-d385-45f2-a8c4-23e9eafcd26f
	I1103 20:48:36.438280   98430 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rxqxb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c6417a12-b154-42c3-ac95-a45396156b0e","resourceVersion":"392","creationTimestamp":"2023-11-03T20:48:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"bfd15fc0-b82f-4ac0-a436-3489d2d0b53c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:48:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bfd15fc0-b82f-4ac0-a436-3489d2d0b53c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1103 20:48:36.438761   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:36.438779   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:36.438788   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:36.438794   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:36.440477   98430 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1103 20:48:36.440493   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:36.440502   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:36.440510   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:36.440518   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:36 GMT
	I1103 20:48:36.440533   98430 round_trippers.go:580]     Audit-Id: b7b72b71-81d9-4fe0-b038-743dc8cefb44
	I1103 20:48:36.440546   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:36.440558   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:36.440745   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"387","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 5954 chars]
	I1103 20:48:36.441110   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rxqxb
	I1103 20:48:36.441122   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:36.441130   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:36.441135   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:36.442666   98430 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1103 20:48:36.442683   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:36.442689   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:36 GMT
	I1103 20:48:36.442695   98430 round_trippers.go:580]     Audit-Id: 70292216-a04f-4e4d-840c-c37352a3e978
	I1103 20:48:36.442700   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:36.442705   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:36.442710   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:36.442718   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:36.442827   98430 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rxqxb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c6417a12-b154-42c3-ac95-a45396156b0e","resourceVersion":"392","creationTimestamp":"2023-11-03T20:48:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"bfd15fc0-b82f-4ac0-a436-3489d2d0b53c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:48:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bfd15fc0-b82f-4ac0-a436-3489d2d0b53c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1103 20:48:36.443237   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:36.443250   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:36.443257   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:36.443264   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:36.444743   98430 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1103 20:48:36.444758   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:36.444764   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:36.444770   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:36 GMT
	I1103 20:48:36.444775   98430 round_trippers.go:580]     Audit-Id: 592fa6c3-ad2c-4350-a43a-8636d9372dd4
	I1103 20:48:36.444783   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:36.444793   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:36.444806   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:36.444939   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"387","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 5954 chars]
	I1103 20:48:36.945665   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rxqxb
	I1103 20:48:36.945686   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:36.945694   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:36.945700   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:36.948087   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:36.948111   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:36.948120   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:36.948128   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:36 GMT
	I1103 20:48:36.948137   98430 round_trippers.go:580]     Audit-Id: 83aab46f-d852-447e-8cdf-4dce432df1c0
	I1103 20:48:36.948145   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:36.948153   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:36.948166   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:36.948273   98430 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rxqxb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c6417a12-b154-42c3-ac95-a45396156b0e","resourceVersion":"392","creationTimestamp":"2023-11-03T20:48:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"bfd15fc0-b82f-4ac0-a436-3489d2d0b53c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:48:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bfd15fc0-b82f-4ac0-a436-3489d2d0b53c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1103 20:48:36.948712   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:36.948724   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:36.948732   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:36.948737   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:36.950609   98430 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1103 20:48:36.950627   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:36.950637   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:36.950645   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:36 GMT
	I1103 20:48:36.950655   98430 round_trippers.go:580]     Audit-Id: dfe01b02-19ff-4da0-a65e-f105c6c04151
	I1103 20:48:36.950664   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:36.950670   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:36.950675   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:36.950814   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"387","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 5954 chars]
	I1103 20:48:37.445420   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rxqxb
	I1103 20:48:37.445441   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:37.445449   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:37.445455   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:37.447493   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:37.447519   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:37.447525   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:37 GMT
	I1103 20:48:37.447530   98430 round_trippers.go:580]     Audit-Id: ca2b35f9-f507-42eb-82a3-e50ebffe75df
	I1103 20:48:37.447536   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:37.447540   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:37.447545   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:37.447551   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:37.447714   98430 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rxqxb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c6417a12-b154-42c3-ac95-a45396156b0e","resourceVersion":"403","creationTimestamp":"2023-11-03T20:48:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"bfd15fc0-b82f-4ac0-a436-3489d2d0b53c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:48:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bfd15fc0-b82f-4ac0-a436-3489d2d0b53c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1103 20:48:37.448262   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:37.448280   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:37.448291   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:37.448301   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:37.450212   98430 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1103 20:48:37.450233   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:37.450241   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:37.450246   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:37.450251   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:37.450256   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:37 GMT
	I1103 20:48:37.450263   98430 round_trippers.go:580]     Audit-Id: fa50d01f-6199-4479-ab4c-9eb3c52ae475
	I1103 20:48:37.450271   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:37.450409   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"387","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 5954 chars]
	I1103 20:48:37.450795   98430 pod_ready.go:92] pod "coredns-5dd5756b68-rxqxb" in "kube-system" namespace has status "Ready":"True"
	I1103 20:48:37.450813   98430 pod_ready.go:81] duration metric: took 1.014412898s waiting for pod "coredns-5dd5756b68-rxqxb" in "kube-system" namespace to be "Ready" ...
	I1103 20:48:37.450822   98430 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-280480" in "kube-system" namespace to be "Ready" ...
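(Each system-critical pod now gets the same treatment: fetch the Pod, read its Ready condition, and re-fetch the node to confirm it is still Ready, which is why every pod GET below is paired with a node GET. A hedged sketch of the per-pod condition check; the function name is made up for illustration, while the real check sits in pod_ready.go.)

package readiness

import corev1 "k8s.io/api/core/v1"

// podIsReady reports whether the Pod's Ready condition is True, the
// test applied to coredns, etcd, kube-apiserver, and the rest of the
// system pods in this log. (Illustrative helper name.)
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}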
	I1103 20:48:37.450880   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-280480
	I1103 20:48:37.450891   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:37.450902   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:37.450920   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:37.452658   98430 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1103 20:48:37.452677   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:37.452687   98430 round_trippers.go:580]     Audit-Id: 7eadd1cb-c9a2-4924-aab2-9408337d1ddb
	I1103 20:48:37.452696   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:37.452704   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:37.452713   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:37.452725   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:37.452737   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:37 GMT
	I1103 20:48:37.452818   98430 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-280480","namespace":"kube-system","uid":"064baf76-3464-4729-ac2b-cd0fa19b7914","resourceVersion":"279","creationTimestamp":"2023-11-03T20:47:52Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"2934b8bf856873a89bdd628d2cb9fe01","kubernetes.io/config.mirror":"2934b8bf856873a89bdd628d2cb9fe01","kubernetes.io/config.seen":"2023-11-03T20:47:52.011468428Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1103 20:48:37.453137   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:37.453147   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:37.453154   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:37.453160   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:37.454963   98430 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1103 20:48:37.454983   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:37.454990   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:37.454995   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:37.455000   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:37 GMT
	I1103 20:48:37.455005   98430 round_trippers.go:580]     Audit-Id: 9ff6e2e6-1141-4936-be63-337c77dc7c79
	I1103 20:48:37.455049   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:37.455064   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:37.455161   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"387","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 5954 chars]
	I1103 20:48:37.455418   98430 pod_ready.go:92] pod "etcd-multinode-280480" in "kube-system" namespace has status "Ready":"True"
	I1103 20:48:37.455431   98430 pod_ready.go:81] duration metric: took 4.60242ms waiting for pod "etcd-multinode-280480" in "kube-system" namespace to be "Ready" ...
	I1103 20:48:37.455443   98430 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-280480" in "kube-system" namespace to be "Ready" ...
	I1103 20:48:37.455490   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-280480
	I1103 20:48:37.455497   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:37.455504   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:37.455510   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:37.457223   98430 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1103 20:48:37.457240   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:37.457246   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:37.457252   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:37.457257   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:37 GMT
	I1103 20:48:37.457262   98430 round_trippers.go:580]     Audit-Id: 48ef1d30-c4a6-4486-9f76-8dd7f0d2380b
	I1103 20:48:37.457267   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:37.457273   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:37.457466   98430 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-280480","namespace":"kube-system","uid":"6f42eff1-84c4-40a2-a107-c04dcc981ab2","resourceVersion":"278","creationTimestamp":"2023-11-03T20:47:52Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"0759901116a5c84d9728f196af5ff715","kubernetes.io/config.mirror":"0759901116a5c84d9728f196af5ff715","kubernetes.io/config.seen":"2023-11-03T20:47:52.011472688Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1103 20:48:37.457839   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:37.457853   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:37.457860   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:37.457866   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:37.459423   98430 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1103 20:48:37.459442   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:37.459452   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:37.459461   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:37.459469   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:37.459476   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:37 GMT
	I1103 20:48:37.459488   98430 round_trippers.go:580]     Audit-Id: 32ca3ff0-2c3c-475d-bd77-43d15aa003f0
	I1103 20:48:37.459501   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:37.459631   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"387","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 5954 chars]
	I1103 20:48:37.459914   98430 pod_ready.go:92] pod "kube-apiserver-multinode-280480" in "kube-system" namespace has status "Ready":"True"
	I1103 20:48:37.459928   98430 pod_ready.go:81] duration metric: took 4.478569ms waiting for pod "kube-apiserver-multinode-280480" in "kube-system" namespace to be "Ready" ...
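
Each pod_ready probe above is a plain GET on the pod followed by a GET on its node; the pod counts as ready once its PodReady condition reports True. A minimal client-go sketch of the same loop (the kubeconfig path and poll interval are illustrative, not read from this run, and this is not the actual pod_ready.go source):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// kubeconfig path is illustrative; this run talks to 192.168.58.2:8443
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// poll until the PodReady condition is True, up to the 6m0s budget seen in the log
    	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
    			"kube-apiserver-multinode-280480", metav1.GetOptions{})
    		if err != nil {
    			return false, nil // treat errors as transient and keep polling
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady {
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    	fmt.Println("ready:", err == nil)
    }
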
	I1103 20:48:37.459936   98430 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-280480" in "kube-system" namespace to be "Ready" ...
	I1103 20:48:37.459976   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-280480
	I1103 20:48:37.459980   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:37.459991   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:37.459999   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:37.461489   98430 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1103 20:48:37.461531   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:37.461546   98430 round_trippers.go:580]     Audit-Id: ffa85233-8090-4fd3-90cd-c01f0bb23205
	I1103 20:48:37.461566   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:37.461574   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:37.461585   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:37.461596   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:37.461605   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:37 GMT
	I1103 20:48:37.461733   98430 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-280480","namespace":"kube-system","uid":"04b47790-633d-4d65-8791-33dd357dec71","resourceVersion":"283","creationTimestamp":"2023-11-03T20:47:51Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7a398ee5e88ea41c5429b679b15d57c9","kubernetes.io/config.mirror":"7a398ee5e88ea41c5429b679b15d57c9","kubernetes.io/config.seen":"2023-11-03T20:47:46.573400973Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1103 20:48:37.627444   98430 request.go:629] Waited for 165.351895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:37.627514   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:37.627519   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:37.627527   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:37.627534   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:37.629647   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:37.629666   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:37.629676   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:37.629685   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:37 GMT
	I1103 20:48:37.629694   98430 round_trippers.go:580]     Audit-Id: ac82a3a0-3cfb-4f5a-8a1c-a00e875d5d8b
	I1103 20:48:37.629704   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:37.629712   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:37.629727   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:37.629819   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"387","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 5954 chars]
	I1103 20:48:37.630105   98430 pod_ready.go:92] pod "kube-controller-manager-multinode-280480" in "kube-system" namespace has status "Ready":"True"
	I1103 20:48:37.630118   98430 pod_ready.go:81] duration metric: took 170.176025ms waiting for pod "kube-controller-manager-multinode-280480" in "kube-system" namespace to be "Ready" ...
	I1103 20:48:37.630128   98430 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lsfmj" in "kube-system" namespace to be "Ready" ...
	I1103 20:48:37.827558   98430 request.go:629] Waited for 197.36656ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lsfmj
	I1103 20:48:37.827621   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lsfmj
	I1103 20:48:37.827629   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:37.827637   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:37.827648   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:37.830088   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:37.830104   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:37.830110   98430 round_trippers.go:580]     Audit-Id: 047b3d9f-682b-4ab2-8b09-a1472aa2861a
	I1103 20:48:37.830116   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:37.830127   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:37.830136   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:37.830146   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:37.830158   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:37 GMT
	I1103 20:48:37.830291   98430 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lsfmj","generateName":"kube-proxy-","namespace":"kube-system","uid":"09340714-82ee-4eb4-9884-b262fa594650","resourceVersion":"364","creationTimestamp":"2023-11-03T20:48:04Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2bb28b36-9e85-4ebb-b884-5447612fba2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:48:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2bb28b36-9e85-4ebb-b884-5447612fba2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5509 chars]
	I1103 20:48:38.027034   98430 request.go:629] Waited for 196.353748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:38.027105   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:38.027110   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:38.027118   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:38.027127   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:38.029355   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:38.029372   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:38.029379   98430 round_trippers.go:580]     Audit-Id: 332959c9-87af-4f01-875f-03087d0a6d54
	I1103 20:48:38.029384   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:38.029389   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:38.029394   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:38.029399   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:38.029404   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:38 GMT
	I1103 20:48:38.029552   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"387","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 5954 chars]
	I1103 20:48:38.029860   98430 pod_ready.go:92] pod "kube-proxy-lsfmj" in "kube-system" namespace has status "Ready":"True"
	I1103 20:48:38.029874   98430 pod_ready.go:81] duration metric: took 399.740526ms waiting for pod "kube-proxy-lsfmj" in "kube-system" namespace to be "Ready" ...
	I1103 20:48:38.029883   98430 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-280480" in "kube-system" namespace to be "Ready" ...
	I1103 20:48:38.227026   98430 request.go:629] Waited for 197.086691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-280480
	I1103 20:48:38.227094   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-280480
	I1103 20:48:38.227099   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:38.227106   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:38.227112   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:38.229356   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:38.229384   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:38.229396   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:38.229404   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:38.229413   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:38.229428   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:38.229438   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:38 GMT
	I1103 20:48:38.229447   98430 round_trippers.go:580]     Audit-Id: 1835fd5e-01aa-48e5-9501-ef05405ac7ff
	I1103 20:48:38.229589   98430 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-280480","namespace":"kube-system","uid":"939a5f81-e4a2-4840-b9a4-e2636be8b7cb","resourceVersion":"287","creationTimestamp":"2023-11-03T20:47:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3d5473c5ead1ca7dc4d20c46b31b7dc2","kubernetes.io/config.mirror":"3d5473c5ead1ca7dc4d20c46b31b7dc2","kubernetes.io/config.seen":"2023-11-03T20:47:46.573393114Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1103 20:48:38.427343   98430 request.go:629] Waited for 197.340532ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:38.427405   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:38.427413   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:38.427421   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:38.427430   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:38.429241   98430 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1103 20:48:38.429257   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:38.429264   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:38.429269   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:38 GMT
	I1103 20:48:38.429276   98430 round_trippers.go:580]     Audit-Id: b4209bca-528b-42c4-b401-ba1f6b29d722
	I1103 20:48:38.429286   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:38.429296   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:38.429308   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:38.429422   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"387","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 5954 chars]
	I1103 20:48:38.429823   98430 pod_ready.go:92] pod "kube-scheduler-multinode-280480" in "kube-system" namespace has status "Ready":"True"
	I1103 20:48:38.429846   98430 pod_ready.go:81] duration metric: took 399.956742ms waiting for pod "kube-scheduler-multinode-280480" in "kube-system" namespace to be "Ready" ...
	I1103 20:48:38.429859   98430 pod_ready.go:38] duration metric: took 2.000832534s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
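
The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines in the loop above are emitted by client-go's own token-bucket limiter: with the default rest.Config settings (QPS 5, burst 10), a tight status-poll loop drains the burst and every further request is delayed on the client before it ever reaches the apiserver. A sketch of where those knobs live, assuming the client-go defaults rather than whatever minikube actually configures:

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // newThrottledClient shows the client-side limiter knobs; 5/10 are the
    // client-go defaults that produce the "Waited for ..." lines when exceeded.
    func newThrottledClient(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	cfg.QPS = 5    // sustained requests/second before the limiter kicks in
    	cfg.Burst = 10 // extra headroom for short bursts
    	return kubernetes.NewForConfig(cfg)
    }
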
	I1103 20:48:38.429882   98430 api_server.go:52] waiting for apiserver process to appear ...
	I1103 20:48:38.430013   98430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1103 20:48:38.439482   98430 command_runner.go:130] > 1428
	I1103 20:48:38.440338   98430 api_server.go:72] duration metric: took 33.216961499s to wait for apiserver process to appear ...
	I1103 20:48:38.440358   98430 api_server.go:88] waiting for apiserver healthz status ...
	I1103 20:48:38.440373   98430 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1103 20:48:38.444229   98430 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1103 20:48:38.444279   98430 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1103 20:48:38.444287   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:38.444294   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:38.444300   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:38.445181   98430 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1103 20:48:38.445194   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:38.445201   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:38.445206   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:38.445212   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:38.445226   98430 round_trippers.go:580]     Content-Length: 264
	I1103 20:48:38.445235   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:38 GMT
	I1103 20:48:38.445241   98430 round_trippers.go:580]     Audit-Id: 645c00d8-9622-4097-ba25-96e26ed8ea22
	I1103 20:48:38.445250   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:38.445274   98430 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1103 20:48:38.445367   98430 api_server.go:141] control plane version: v1.28.3
	I1103 20:48:38.445384   98430 api_server.go:131] duration metric: took 5.020035ms to wait for apiserver health ...
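
The health gate is two requests: a raw GET /healthz that must come back 200 with body "ok", then GET /version to record the control-plane version. With client-go the same pair can be written roughly as follows (the function name and kubeconfig handling are illustrative):

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func checkControlPlane(kubeconfig string) error {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return err
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		return err
    	}
    	// raw GET /healthz: a healthy apiserver answers 200 with body "ok"
    	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
    	if err != nil || string(body) != "ok" {
    		return fmt.Errorf("apiserver not healthy yet: %v", err)
    	}
    	// GET /version via the discovery client; GitVersion is v1.28.3 in this run
    	info, err := cs.Discovery().ServerVersion()
    	if err != nil {
    		return err
    	}
    	fmt.Println("control plane version:", info.GitVersion)
    	return nil
    }
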
	I1103 20:48:38.445394   98430 system_pods.go:43] waiting for kube-system pods to appear ...
	I1103 20:48:38.626710   98430 request.go:629] Waited for 181.253863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1103 20:48:38.626779   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1103 20:48:38.626786   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:38.626797   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:38.626811   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:38.629838   98430 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1103 20:48:38.629863   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:38.629873   98430 round_trippers.go:580]     Audit-Id: 987b91bc-e03f-4383-8465-4b7165688e5c
	I1103 20:48:38.629882   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:38.629890   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:38.629898   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:38.629907   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:38.629924   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:38 GMT
	I1103 20:48:38.630473   98430 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"408"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rxqxb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c6417a12-b154-42c3-ac95-a45396156b0e","resourceVersion":"403","creationTimestamp":"2023-11-03T20:48:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"bfd15fc0-b82f-4ac0-a436-3489d2d0b53c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:48:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bfd15fc0-b82f-4ac0-a436-3489d2d0b53c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I1103 20:48:38.632185   98430 system_pods.go:59] 8 kube-system pods found
	I1103 20:48:38.632209   98430 system_pods.go:61] "coredns-5dd5756b68-rxqxb" [c6417a12-b154-42c3-ac95-a45396156b0e] Running
	I1103 20:48:38.632214   98430 system_pods.go:61] "etcd-multinode-280480" [064baf76-3464-4729-ac2b-cd0fa19b7914] Running
	I1103 20:48:38.632222   98430 system_pods.go:61] "kindnet-4khv5" [275c32e9-1923-43d6-8f29-fb7afd49891f] Running
	I1103 20:48:38.632235   98430 system_pods.go:61] "kube-apiserver-multinode-280480" [6f42eff1-84c4-40a2-a107-c04dcc981ab2] Running
	I1103 20:48:38.632248   98430 system_pods.go:61] "kube-controller-manager-multinode-280480" [04b47790-633d-4d65-8791-33dd357dec71] Running
	I1103 20:48:38.632260   98430 system_pods.go:61] "kube-proxy-lsfmj" [09340714-82ee-4eb4-9884-b262fa594650] Running
	I1103 20:48:38.632268   98430 system_pods.go:61] "kube-scheduler-multinode-280480" [939a5f81-e4a2-4840-b9a4-e2636be8b7cb] Running
	I1103 20:48:38.632272   98430 system_pods.go:61] "storage-provisioner" [1874c901-a5b0-41a8-922c-94cb29090e3e] Running
	I1103 20:48:38.632281   98430 system_pods.go:74] duration metric: took 186.881333ms to wait for pod list to return data ...
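
The system_pods check above is a single List over kube-system plus a per-pod status test. A hedged sketch of the equivalent client-go call (the helper name is made up for illustration):

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // listSystemPods mirrors the system_pods report: every kube-system pod Running.
    func listSystemPods(cs *kubernetes.Clientset) error {
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		fmt.Printf("%q %s\n", p.Name, p.Status.Phase) // expect Running for all 8 above
    	}
    	return nil
    }
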
	I1103 20:48:38.632291   98430 default_sa.go:34] waiting for default service account to be created ...
	I1103 20:48:38.827748   98430 request.go:629] Waited for 195.384599ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1103 20:48:38.827818   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1103 20:48:38.827825   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:38.827836   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:38.827849   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:38.830005   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:38.830023   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:38.830030   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:38.830037   98430 round_trippers.go:580]     Content-Length: 261
	I1103 20:48:38.830045   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:38 GMT
	I1103 20:48:38.830057   98430 round_trippers.go:580]     Audit-Id: 8e0e4ae6-88b1-4a90-b880-8a30aeb6d87b
	I1103 20:48:38.830069   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:38.830079   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:38.830086   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:38.830104   98430 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"408"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"915567cf-0727-491a-8b80-e9ea5766d4c2","resourceVersion":"326","creationTimestamp":"2023-11-03T20:48:04Z"}}]}
	I1103 20:48:38.830286   98430 default_sa.go:45] found service account: "default"
	I1103 20:48:38.830304   98430 default_sa.go:55] duration metric: took 198.004847ms for default service account to be created ...
	I1103 20:48:38.830313   98430 system_pods.go:116] waiting for k8s-apps to be running ...
	I1103 20:48:39.027752   98430 request.go:629] Waited for 197.349474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1103 20:48:39.027818   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1103 20:48:39.027832   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:39.027844   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:39.027858   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:39.031146   98430 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1103 20:48:39.031167   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:39.031174   98430 round_trippers.go:580]     Audit-Id: b3d2fe63-14d3-45ad-8316-d246b63b6b07
	I1103 20:48:39.031180   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:39.031190   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:39.031203   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:39.031211   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:39.031225   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:39 GMT
	I1103 20:48:39.031643   98430 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"408"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rxqxb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c6417a12-b154-42c3-ac95-a45396156b0e","resourceVersion":"403","creationTimestamp":"2023-11-03T20:48:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"bfd15fc0-b82f-4ac0-a436-3489d2d0b53c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:48:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bfd15fc0-b82f-4ac0-a436-3489d2d0b53c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I1103 20:48:39.033360   98430 system_pods.go:86] 8 kube-system pods found
	I1103 20:48:39.033384   98430 system_pods.go:89] "coredns-5dd5756b68-rxqxb" [c6417a12-b154-42c3-ac95-a45396156b0e] Running
	I1103 20:48:39.033390   98430 system_pods.go:89] "etcd-multinode-280480" [064baf76-3464-4729-ac2b-cd0fa19b7914] Running
	I1103 20:48:39.033398   98430 system_pods.go:89] "kindnet-4khv5" [275c32e9-1923-43d6-8f29-fb7afd49891f] Running
	I1103 20:48:39.033410   98430 system_pods.go:89] "kube-apiserver-multinode-280480" [6f42eff1-84c4-40a2-a107-c04dcc981ab2] Running
	I1103 20:48:39.033420   98430 system_pods.go:89] "kube-controller-manager-multinode-280480" [04b47790-633d-4d65-8791-33dd357dec71] Running
	I1103 20:48:39.033432   98430 system_pods.go:89] "kube-proxy-lsfmj" [09340714-82ee-4eb4-9884-b262fa594650] Running
	I1103 20:48:39.033442   98430 system_pods.go:89] "kube-scheduler-multinode-280480" [939a5f81-e4a2-4840-b9a4-e2636be8b7cb] Running
	I1103 20:48:39.033450   98430 system_pods.go:89] "storage-provisioner" [1874c901-a5b0-41a8-922c-94cb29090e3e] Running
	I1103 20:48:39.033457   98430 system_pods.go:126] duration metric: took 203.13873ms to wait for k8s-apps to be running ...
	I1103 20:48:39.033466   98430 system_svc.go:44] waiting for kubelet service to be running ....
	I1103 20:48:39.033523   98430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1103 20:48:39.044321   98430 system_svc.go:56] duration metric: took 10.846394ms WaitForService to wait for kubelet.
	I1103 20:48:39.044345   98430 kubeadm.go:581] duration metric: took 33.820970831s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1103 20:48:39.044362   98430 node_conditions.go:102] verifying NodePressure condition ...
	I1103 20:48:39.226706   98430 request.go:629] Waited for 182.27766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1103 20:48:39.226784   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1103 20:48:39.226794   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:39.226801   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:39.226808   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:39.229154   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:39.229173   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:39.229180   98430 round_trippers.go:580]     Audit-Id: 3ab0aad1-99f1-4609-879f-53d873e066ed
	I1103 20:48:39.229185   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:39.229190   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:39.229196   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:39.229205   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:39.229217   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:39 GMT
	I1103 20:48:39.229387   98430 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"408"},"items":[{"metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"387","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"manage
dFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1"," [truncated 6007 chars]
	I1103 20:48:39.229764   98430 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1103 20:48:39.229784   98430 node_conditions.go:123] node cpu capacity is 8
	I1103 20:48:39.229800   98430 node_conditions.go:105] duration metric: took 185.433732ms to run NodePressure ...
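
The NodePressure pass reads ephemeral-storage and CPU capacity straight off each Node object; the figures logged above (304681132Ki and 8) are resource.Quantity values from Status.Capacity. Roughly, under the same client-go assumptions as the earlier sketches:

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity mirrors the node_conditions figures logged above.
    func printNodeCapacity(cs *kubernetes.Clientset) error {
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage] // 304681132Ki here
    		cpu := n.Status.Capacity[corev1.ResourceCPU]              // 8 here
    		fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
    	}
    	return nil
    }
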
	I1103 20:48:39.229814   98430 start.go:228] waiting for startup goroutines ...
	I1103 20:48:39.229832   98430 start.go:233] waiting for cluster config update ...
	I1103 20:48:39.229845   98430 start.go:242] writing updated cluster config ...
	I1103 20:48:39.232352   98430 out.go:177] 
	I1103 20:48:39.233888   98430 config.go:182] Loaded profile config "multinode-280480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1103 20:48:39.233972   98430 profile.go:148] Saving config to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/config.json ...
	I1103 20:48:39.235872   98430 out.go:177] * Starting worker node multinode-280480-m02 in cluster multinode-280480
	I1103 20:48:39.237819   98430 cache.go:121] Beginning downloading kic base image for docker with crio
	I1103 20:48:39.239300   98430 out.go:177] * Pulling base image ...
	I1103 20:48:39.240788   98430 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1103 20:48:39.240818   98430 cache.go:56] Caching tarball of preloaded images
	I1103 20:48:39.240903   98430 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 in local docker daemon
	I1103 20:48:39.240926   98430 preload.go:174] Found /home/jenkins/minikube-integration/17545-5130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1103 20:48:39.240936   98430 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1103 20:48:39.240998   98430 profile.go:148] Saving config to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/config.json ...
	I1103 20:48:39.256935   98430 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 in local docker daemon, skipping pull
	I1103 20:48:39.256956   98430 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 exists in daemon, skipping load
	I1103 20:48:39.256974   98430 cache.go:194] Successfully downloaded all kic artifacts
	I1103 20:48:39.257005   98430 start.go:365] acquiring machines lock for multinode-280480-m02: {Name:mk44467653f141cacbcb1a78a0f883fce47dd650 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1103 20:48:39.257105   98430 start.go:369] acquired machines lock for "multinode-280480-m02" in 71.728µs
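
The machines lock whose Spec is dumped above (Delay:500ms, Timeout:10m0s) is a named cross-process mutex that serializes machine creation between concurrent minikube invocations. A sketch assuming the github.com/juju/mutex/v2 package, whose Spec fields happen to match the dumped struct; whether minikube wraps exactly this package is an assumption here:

    import (
    	"time"

    	"github.com/juju/clock"
    	"github.com/juju/mutex/v2"
    )

    // acquireMachinesLock takes a named cross-process lock with the retry
    // cadence and budget shown in the log; the caller must Release() it.
    func acquireMachinesLock(name string) (mutex.Releaser, error) {
    	spec := mutex.Spec{
    		Name:    name,                   // e.g. the mk44... hash in the log
    		Clock:   clock.WallClock,
    		Delay:   500 * time.Millisecond, // retry interval, as logged
    		Timeout: 10 * time.Minute,       // give up after 10m, as logged
    	}
    	return mutex.Acquire(spec)
    }
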
	I1103 20:48:39.257132   98430 start.go:93] Provisioning new machine with config: &{Name:multinode-280480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-280480 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1103 20:48:39.257227   98430 start.go:125] createHost starting for "m02" (driver="docker")
	I1103 20:48:39.259983   98430 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1103 20:48:39.260089   98430 start.go:159] libmachine.API.Create for "multinode-280480" (driver="docker")
	I1103 20:48:39.260118   98430 client.go:168] LocalClient.Create starting
	I1103 20:48:39.260189   98430 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem
	I1103 20:48:39.260222   98430 main.go:141] libmachine: Decoding PEM data...
	I1103 20:48:39.260246   98430 main.go:141] libmachine: Parsing certificate...
	I1103 20:48:39.260310   98430 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17545-5130/.minikube/certs/cert.pem
	I1103 20:48:39.260337   98430 main.go:141] libmachine: Decoding PEM data...
	I1103 20:48:39.260355   98430 main.go:141] libmachine: Parsing certificate...
	I1103 20:48:39.260602   98430 cli_runner.go:164] Run: docker network inspect multinode-280480 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1103 20:48:39.274950   98430 network_create.go:77] Found existing network {name:multinode-280480 subnet:0xc0022a7020 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1103 20:48:39.274978   98430 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-280480-m02" container
	I1103 20:48:39.275037   98430 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1103 20:48:39.289392   98430 cli_runner.go:164] Run: docker volume create multinode-280480-m02 --label name.minikube.sigs.k8s.io=multinode-280480-m02 --label created_by.minikube.sigs.k8s.io=true
	I1103 20:48:39.304630   98430 oci.go:103] Successfully created a docker volume multinode-280480-m02
	I1103 20:48:39.304699   98430 cli_runner.go:164] Run: docker run --rm --name multinode-280480-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-280480-m02 --entrypoint /usr/bin/test -v multinode-280480-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 -d /var/lib
	I1103 20:48:39.788833   98430 oci.go:107] Successfully prepared a docker volume multinode-280480-m02
	I1103 20:48:39.788867   98430 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1103 20:48:39.788889   98430 kic.go:194] Starting extracting preloaded images to volume ...
	I1103 20:48:39.788955   98430 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17545-5130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-280480-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 -I lz4 -xf /preloaded.tar -C /extractDir
	I1103 20:48:44.779088   98430 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17545-5130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-280480-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 -I lz4 -xf /preloaded.tar -C /extractDir: (4.990092278s)
	I1103 20:48:44.779126   98430 kic.go:203] duration metric: took 4.990234 seconds to extract preloaded images to volume
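
The preload shortcut avoids pulling images inside the new node: the lz4 tarball is mounted read-only into a throwaway container whose entrypoint is tar, and it extracts into the named volume that later becomes the node's /var. minikube shells out to the docker CLI for this step; a stripped-down Go sketch of the same invocation (argument values abbreviated, helper name invented):

    import (
    	"os/exec"
    )

    // extractPreload mirrors the cli_runner invocation logged above: a one-shot
    // container untars the preload into the node's named volume.
    func extractPreload(tarball, volume, baseImage string) error {
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro", // host tarball, read-only
    		"-v", volume+":/extractDir",        // named volume, becomes /var on the node
    		baseImage,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	return cmd.Run()
    }
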
	W1103 20:48:44.779263   98430 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1103 20:48:44.779379   98430 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1103 20:48:44.829280   98430 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-280480-m02 --name multinode-280480-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-280480-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-280480-m02 --network multinode-280480 --ip 192.168.58.3 --volume multinode-280480-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89
	I1103 20:48:45.108176   98430 cli_runner.go:164] Run: docker container inspect multinode-280480-m02 --format={{.State.Running}}
	I1103 20:48:45.124111   98430 cli_runner.go:164] Run: docker container inspect multinode-280480-m02 --format={{.State.Status}}
	I1103 20:48:45.140359   98430 cli_runner.go:164] Run: docker exec multinode-280480-m02 stat /var/lib/dpkg/alternatives/iptables
	I1103 20:48:45.201569   98430 oci.go:144] the created container "multinode-280480-m02" has a running status.
	I1103 20:48:45.201598   98430 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17545-5130/.minikube/machines/multinode-280480-m02/id_rsa...
	I1103 20:48:45.321929   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/machines/multinode-280480-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1103 20:48:45.321972   98430 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17545-5130/.minikube/machines/multinode-280480-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
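
Each node gets a freshly generated RSA keypair, and the public half is pushed into /home/docker/.ssh/authorized_keys inside the container (the 381-byte copy above). Producing an OpenSSH authorized_keys line in Go looks roughly like this; the 2048-bit size is an assumption, not something the log states:

    import (
    	"crypto/rand"
    	"crypto/rsa"

    	"golang.org/x/crypto/ssh"
    )

    // newAuthorizedKey returns an authorized_keys line for a fresh RSA key.
    // (2048 bits is illustrative; minikube's actual parameters may differ)
    func newAuthorizedKey() ([]byte, *rsa.PrivateKey, error) {
    	priv, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	pub, err := ssh.NewPublicKey(&priv.PublicKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return ssh.MarshalAuthorizedKey(pub), priv, nil // "ssh-rsa AAAA...\n"
    }
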
	I1103 20:48:45.341737   98430 cli_runner.go:164] Run: docker container inspect multinode-280480-m02 --format={{.State.Status}}
	I1103 20:48:45.358091   98430 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1103 20:48:45.358110   98430 kic_runner.go:114] Args: [docker exec --privileged multinode-280480-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1103 20:48:45.417174   98430 cli_runner.go:164] Run: docker container inspect multinode-280480-m02 --format={{.State.Status}}
	I1103 20:48:45.432973   98430 machine.go:88] provisioning docker machine ...
	I1103 20:48:45.433010   98430 ubuntu.go:169] provisioning hostname "multinode-280480-m02"
	I1103 20:48:45.433059   98430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-280480-m02
	I1103 20:48:45.450137   98430 main.go:141] libmachine: Using SSH client type: native
	I1103 20:48:45.450678   98430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32854 <nil> <nil>}
	I1103 20:48:45.450705   98430 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-280480-m02 && echo "multinode-280480-m02" | sudo tee /etc/hostname
	I1103 20:48:45.451603   98430 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36900->127.0.0.1:32854: read: connection reset by peer
	I1103 20:48:48.578827   98430 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-280480-m02
	
	I1103 20:48:48.578894   98430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-280480-m02
	I1103 20:48:48.596406   98430 main.go:141] libmachine: Using SSH client type: native
	I1103 20:48:48.596906   98430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32854 <nil> <nil>}
	I1103 20:48:48.596937   98430 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-280480-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-280480-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-280480-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1103 20:48:48.716178   98430 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1103 20:48:48.716206   98430 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17545-5130/.minikube CaCertPath:/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17545-5130/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17545-5130/.minikube}
	I1103 20:48:48.716220   98430 ubuntu.go:177] setting up certificates
	I1103 20:48:48.716228   98430 provision.go:83] configureAuth start
	I1103 20:48:48.716270   98430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-280480-m02
	I1103 20:48:48.732499   98430 provision.go:138] copyHostCerts
	I1103 20:48:48.732540   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17545-5130/.minikube/ca.pem
	I1103 20:48:48.732576   98430 exec_runner.go:144] found /home/jenkins/minikube-integration/17545-5130/.minikube/ca.pem, removing ...
	I1103 20:48:48.732590   98430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.pem
	I1103 20:48:48.732666   98430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17545-5130/.minikube/ca.pem (1082 bytes)
	I1103 20:48:48.732756   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17545-5130/.minikube/cert.pem
	I1103 20:48:48.732781   98430 exec_runner.go:144] found /home/jenkins/minikube-integration/17545-5130/.minikube/cert.pem, removing ...
	I1103 20:48:48.732792   98430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17545-5130/.minikube/cert.pem
	I1103 20:48:48.732828   98430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17545-5130/.minikube/cert.pem (1123 bytes)
	I1103 20:48:48.732888   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17545-5130/.minikube/key.pem
	I1103 20:48:48.732918   98430 exec_runner.go:144] found /home/jenkins/minikube-integration/17545-5130/.minikube/key.pem, removing ...
	I1103 20:48:48.732930   98430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17545-5130/.minikube/key.pem
	I1103 20:48:48.732965   98430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17545-5130/.minikube/key.pem (1679 bytes)
	I1103 20:48:48.733030   98430 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17545-5130/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca-key.pem org=jenkins.multinode-280480-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-280480-m02]
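
The server certificate is minted from the local CA with the SAN list logged above (node IP, localhost names, hostname) and the 26280h expiry from the config dump. A compact crypto/x509 sketch of that signing step, with CA loading omitted and the exact field mapping (e.g. putting the org string in Subject.Organization) being an assumption:

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // signServerCert issues a server cert with the SANs logged above.
    // caCert/caKey come from ca.pem / ca-key.pem (loading not shown).
    func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-280480-m02"}},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
    		DNSNames:     []string{"localhost", "minikube", "multinode-280480-m02"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	return der, key, err
    }
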
	I1103 20:48:48.964070   98430 provision.go:172] copyRemoteCerts
	I1103 20:48:48.964121   98430 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1103 20:48:48.964149   98430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-280480-m02
	I1103 20:48:48.979665   98430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32854 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/multinode-280480-m02/id_rsa Username:docker}
	I1103 20:48:49.068267   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1103 20:48:49.068316   98430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1103 20:48:49.088392   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1103 20:48:49.088462   98430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1103 20:48:49.108169   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1103 20:48:49.108218   98430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1103 20:48:49.127831   98430 provision.go:86] duration metric: configureAuth took 411.593773ms
	I1103 20:48:49.127860   98430 ubuntu.go:193] setting minikube options for container-runtime
	I1103 20:48:49.128019   98430 config.go:182] Loaded profile config "multinode-280480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1103 20:48:49.128133   98430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-280480-m02
	I1103 20:48:49.144029   98430 main.go:141] libmachine: Using SSH client type: native
	I1103 20:48:49.144490   98430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32854 <nil> <nil>}
	I1103 20:48:49.144514   98430 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1103 20:48:49.343094   98430 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1103 20:48:49.343124   98430 machine.go:91] provisioned docker machine in 3.910128422s
	I1103 20:48:49.343135   98430 client.go:171] LocalClient.Create took 10.083007689s
	I1103 20:48:49.343154   98430 start.go:167] duration metric: libmachine.API.Create for "multinode-280480" took 10.083064123s
	I1103 20:48:49.343166   98430 start.go:300] post-start starting for "multinode-280480-m02" (driver="docker")
	I1103 20:48:49.343180   98430 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1103 20:48:49.343245   98430 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1103 20:48:49.343295   98430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-280480-m02
	I1103 20:48:49.360203   98430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32854 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/multinode-280480-m02/id_rsa Username:docker}
	I1103 20:48:49.448577   98430 ssh_runner.go:195] Run: cat /etc/os-release
	I1103 20:48:49.451694   98430 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1103 20:48:49.451717   98430 command_runner.go:130] > NAME="Ubuntu"
	I1103 20:48:49.451726   98430 command_runner.go:130] > VERSION_ID="22.04"
	I1103 20:48:49.451735   98430 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1103 20:48:49.451742   98430 command_runner.go:130] > VERSION_CODENAME=jammy
	I1103 20:48:49.451748   98430 command_runner.go:130] > ID=ubuntu
	I1103 20:48:49.451753   98430 command_runner.go:130] > ID_LIKE=debian
	I1103 20:48:49.451766   98430 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1103 20:48:49.451779   98430 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1103 20:48:49.451792   98430 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1103 20:48:49.451808   98430 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1103 20:48:49.451829   98430 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1103 20:48:49.451874   98430 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1103 20:48:49.451913   98430 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1103 20:48:49.451926   98430 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1103 20:48:49.451934   98430 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1103 20:48:49.451945   98430 filesync.go:126] Scanning /home/jenkins/minikube-integration/17545-5130/.minikube/addons for local assets ...
	I1103 20:48:49.452000   98430 filesync.go:126] Scanning /home/jenkins/minikube-integration/17545-5130/.minikube/files for local assets ...
	I1103 20:48:49.452120   98430 filesync.go:149] local asset: /home/jenkins/minikube-integration/17545-5130/.minikube/files/etc/ssl/certs/118872.pem -> 118872.pem in /etc/ssl/certs
	I1103 20:48:49.452134   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/files/etc/ssl/certs/118872.pem -> /etc/ssl/certs/118872.pem
	I1103 20:48:49.452238   98430 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1103 20:48:49.459783   98430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/files/etc/ssl/certs/118872.pem --> /etc/ssl/certs/118872.pem (1708 bytes)
	I1103 20:48:49.480233   98430 start.go:303] post-start completed in 137.052814ms
	I1103 20:48:49.480577   98430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-280480-m02
	I1103 20:48:49.496273   98430 profile.go:148] Saving config to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/config.json ...
	I1103 20:48:49.496609   98430 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1103 20:48:49.496656   98430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-280480-m02
	I1103 20:48:49.512040   98430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32854 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/multinode-280480-m02/id_rsa Username:docker}
	I1103 20:48:49.596767   98430 command_runner.go:130] > 20%
	I1103 20:48:49.597022   98430 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1103 20:48:49.600847   98430 command_runner.go:130] > 234G
	I1103 20:48:49.601045   98430 start.go:128] duration metric: createHost completed in 10.343803479s
	I1103 20:48:49.601066   98430 start.go:83] releasing machines lock for "multinode-280480-m02", held for 10.343945881s
	I1103 20:48:49.601129   98430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-280480-m02
	I1103 20:48:49.619582   98430 out.go:177] * Found network options:
	I1103 20:48:49.621106   98430 out.go:177]   - NO_PROXY=192.168.58.2
	W1103 20:48:49.622546   98430 proxy.go:119] fail to check proxy env: Error ip not in block
	W1103 20:48:49.622577   98430 proxy.go:119] fail to check proxy env: Error ip not in block
	I1103 20:48:49.622637   98430 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1103 20:48:49.622668   98430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-280480-m02
	I1103 20:48:49.622729   98430 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1103 20:48:49.622792   98430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-280480-m02
	I1103 20:48:49.640163   98430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32854 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/multinode-280480-m02/id_rsa Username:docker}
	I1103 20:48:49.640169   98430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32854 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/multinode-280480-m02/id_rsa Username:docker}
	I1103 20:48:49.859061   98430 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1103 20:48:49.859096   98430 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1103 20:48:49.862769   98430 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1103 20:48:49.862794   98430 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1103 20:48:49.862808   98430 command_runner.go:130] > Device: b0h/176d	Inode: 540722      Links: 1
	I1103 20:48:49.862818   98430 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1103 20:48:49.862828   98430 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1103 20:48:49.862842   98430 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1103 20:48:49.862850   98430 command_runner.go:130] > Change: 2023-11-03 20:29:19.315787835 +0000
	I1103 20:48:49.862862   98430 command_runner.go:130] >  Birth: 2023-11-03 20:29:19.315787835 +0000
	I1103 20:48:49.862984   98430 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1103 20:48:49.879229   98430 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1103 20:48:49.879291   98430 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1103 20:48:49.904532   98430 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1103 20:48:49.904568   98430 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
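The two find invocations above are how minikube neutralizes pre-installed CNI configs: matching files are renamed with a .mk_disabled suffix so the runtime no longer loads them. An interactive-shell equivalent of the second command (quoting added, and the logged sh -c "mv {} {}.mk_disabled" substitution rewritten in the safer positional-argument form):

	# Rename every bridge/podman CNI config that is not already disabled.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;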
	I1103 20:48:49.904575   98430 start.go:472] detecting cgroup driver to use...
	I1103 20:48:49.904601   98430 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1103 20:48:49.904642   98430 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1103 20:48:49.917482   98430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1103 20:48:49.926562   98430 docker.go:203] disabling cri-docker service (if available) ...
	I1103 20:48:49.926599   98430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1103 20:48:49.937902   98430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1103 20:48:49.949932   98430 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1103 20:48:50.019732   98430 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1103 20:48:50.095031   98430 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1103 20:48:50.095062   98430 docker.go:219] disabling docker service ...
	I1103 20:48:50.095112   98430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1103 20:48:50.110957   98430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1103 20:48:50.120314   98430 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1103 20:48:50.191917   98430 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1103 20:48:50.191989   98430 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1103 20:48:50.267622   98430 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1103 20:48:50.267693   98430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
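The block above follows the usual stop, disable, mask sequence for taking a systemd service permanently out of play; masking symlinks the unit to /dev/null (as the "Created symlink ... → /dev/null" lines confirm), so not even socket activation can restart it. Condensed:

	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket          # drop the sockets.target hook
	sudo systemctl mask docker.service            # unit now points at /dev/null
	sudo systemctl is-active --quiet docker && echo still-active || echo docker-off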
	I1103 20:48:50.277299   98430 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1103 20:48:50.289809   98430 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
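That one-liner writes crictl's client configuration; the single runtime-endpoint key is all crictl needs to find the CRI socket (the contents echoed back at 20:48:50.289809 confirm the write). A quick check of the result:

	# /etc/crictl.yaml now contains:
	#   runtime-endpoint: unix:///var/run/crio/crio.sock
	# so crictl no longer needs an explicit --runtime-endpoint flag:
	sudo crictl version
	sudo crictl ps -a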
	I1103 20:48:50.290504   98430 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1103 20:48:50.290554   98430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1103 20:48:50.298489   98430 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1103 20:48:50.298560   98430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1103 20:48:50.306648   98430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1103 20:48:50.314345   98430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
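The three sed edits pin the pause image and align CRI-O's cgroup manager with the kubelet's (cgroupfs here, per the driver detection above), then force conmon into the pod cgroup. A plausible resulting drop-in follows; the table headers are an assumption about where these keys conventionally live, since the seds match the keys wherever they appear:

	# /etc/crio/crio.conf.d/02-crio.conf (relevant keys only)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"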
	I1103 20:48:50.322243   98430 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1103 20:48:50.329457   98430 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1103 20:48:50.336116   98430 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1103 20:48:50.336157   98430 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1103 20:48:50.342852   98430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1103 20:48:50.410205   98430 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1103 20:48:50.499021   98430 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1103 20:48:50.499079   98430 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1103 20:48:50.502257   98430 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1103 20:48:50.502284   98430 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1103 20:48:50.502292   98430 command_runner.go:130] > Device: b9h/185d	Inode: 190         Links: 1
	I1103 20:48:50.502299   98430 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1103 20:48:50.502304   98430 command_runner.go:130] > Access: 2023-11-03 20:48:50.488138250 +0000
	I1103 20:48:50.502310   98430 command_runner.go:130] > Modify: 2023-11-03 20:48:50.488138250 +0000
	I1103 20:48:50.502315   98430 command_runner.go:130] > Change: 2023-11-03 20:48:50.488138250 +0000
	I1103 20:48:50.502322   98430 command_runner.go:130] >  Birth: -
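The 60 s socket wait above amounts to polling stat on the socket path until it exists. A minimal loop with the same effect, assuming the same path:

	# Wait up to 60 s for CRI-O to recreate its socket after the restart.
	for _ in $(seq 1 60); do
	  [ -S /var/run/crio/crio.sock ] && break
	  sleep 1
	done
	sudo crictl version   # succeeds once the socket is accepting connections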
	I1103 20:48:50.502341   98430 start.go:540] Will wait 60s for crictl version
	I1103 20:48:50.502383   98430 ssh_runner.go:195] Run: which crictl
	I1103 20:48:50.505170   98430 command_runner.go:130] > /usr/bin/crictl
	I1103 20:48:50.505288   98430 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1103 20:48:50.534932   98430 command_runner.go:130] > Version:  0.1.0
	I1103 20:48:50.534950   98430 command_runner.go:130] > RuntimeName:  cri-o
	I1103 20:48:50.534954   98430 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1103 20:48:50.534959   98430 command_runner.go:130] > RuntimeApiVersion:  v1
	I1103 20:48:50.534973   98430 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1103 20:48:50.535014   98430 ssh_runner.go:195] Run: crio --version
	I1103 20:48:50.564672   98430 command_runner.go:130] > crio version 1.24.6
	I1103 20:48:50.564696   98430 command_runner.go:130] > Version:          1.24.6
	I1103 20:48:50.564707   98430 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1103 20:48:50.564715   98430 command_runner.go:130] > GitTreeState:     clean
	I1103 20:48:50.564725   98430 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1103 20:48:50.564734   98430 command_runner.go:130] > GoVersion:        go1.18.2
	I1103 20:48:50.564745   98430 command_runner.go:130] > Compiler:         gc
	I1103 20:48:50.564756   98430 command_runner.go:130] > Platform:         linux/amd64
	I1103 20:48:50.564774   98430 command_runner.go:130] > Linkmode:         dynamic
	I1103 20:48:50.564789   98430 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1103 20:48:50.564801   98430 command_runner.go:130] > SeccompEnabled:   true
	I1103 20:48:50.564808   98430 command_runner.go:130] > AppArmorEnabled:  false
	I1103 20:48:50.566227   98430 ssh_runner.go:195] Run: crio --version
	I1103 20:48:50.596361   98430 command_runner.go:130] > crio version 1.24.6
	I1103 20:48:50.596388   98430 command_runner.go:130] > Version:          1.24.6
	I1103 20:48:50.596400   98430 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1103 20:48:50.596407   98430 command_runner.go:130] > GitTreeState:     clean
	I1103 20:48:50.596417   98430 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1103 20:48:50.596441   98430 command_runner.go:130] > GoVersion:        go1.18.2
	I1103 20:48:50.596449   98430 command_runner.go:130] > Compiler:         gc
	I1103 20:48:50.596464   98430 command_runner.go:130] > Platform:         linux/amd64
	I1103 20:48:50.596481   98430 command_runner.go:130] > Linkmode:         dynamic
	I1103 20:48:50.596495   98430 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1103 20:48:50.596505   98430 command_runner.go:130] > SeccompEnabled:   true
	I1103 20:48:50.596512   98430 command_runner.go:130] > AppArmorEnabled:  false
	I1103 20:48:50.599483   98430 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1103 20:48:50.600958   98430 out.go:177]   - env NO_PROXY=192.168.58.2
	I1103 20:48:50.602414   98430 cli_runner.go:164] Run: docker network inspect multinode-280480 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1103 20:48:50.619226   98430 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1103 20:48:50.622543   98430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
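The hosts update above is idempotent: strip any stale host.minikube.internal line, append the fresh mapping, and copy the temp file back. It uses cp rather than mv, presumably because /etc/hosts in a docker-driver node is a bind mount that can be rewritten in place but not replaced. Reformatted for readability (IP from the log):

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo $'192.168.58.1\thost.minikube.internal'
	} > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts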
	I1103 20:48:50.632062   98430 certs.go:56] Setting up /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480 for IP: 192.168.58.3
	I1103 20:48:50.632093   98430 certs.go:190] acquiring lock for shared ca certs: {Name:mk18b7761724bd0081d8ca2b791d44e447ae6553 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:48:50.632219   98430 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.key
	I1103 20:48:50.632259   98430 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17545-5130/.minikube/proxy-client-ca.key
	I1103 20:48:50.632275   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1103 20:48:50.632297   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1103 20:48:50.632313   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1103 20:48:50.632325   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1103 20:48:50.632378   98430 certs.go:437] found cert: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/home/jenkins/minikube-integration/17545-5130/.minikube/certs/11887.pem (1338 bytes)
	W1103 20:48:50.632417   98430 certs.go:433] ignoring /home/jenkins/minikube-integration/17545-5130/.minikube/certs/home/jenkins/minikube-integration/17545-5130/.minikube/certs/11887_empty.pem, impossibly tiny 0 bytes
	I1103 20:48:50.632453   98430 certs.go:437] found cert: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca-key.pem (1675 bytes)
	I1103 20:48:50.632491   98430 certs.go:437] found cert: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem (1082 bytes)
	I1103 20:48:50.632524   98430 certs.go:437] found cert: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/home/jenkins/minikube-integration/17545-5130/.minikube/certs/cert.pem (1123 bytes)
	I1103 20:48:50.632553   98430 certs.go:437] found cert: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/home/jenkins/minikube-integration/17545-5130/.minikube/certs/key.pem (1679 bytes)
	I1103 20:48:50.632616   98430 certs.go:437] found cert: /home/jenkins/minikube-integration/17545-5130/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17545-5130/.minikube/files/etc/ssl/certs/118872.pem (1708 bytes)
	I1103 20:48:50.632660   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/files/etc/ssl/certs/118872.pem -> /usr/share/ca-certificates/118872.pem
	I1103 20:48:50.632679   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1103 20:48:50.632726   98430 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/11887.pem -> /usr/share/ca-certificates/11887.pem
	I1103 20:48:50.633173   98430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1103 20:48:50.654213   98430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1103 20:48:50.674341   98430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1103 20:48:50.695134   98430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1103 20:48:50.715470   98430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/files/etc/ssl/certs/118872.pem --> /usr/share/ca-certificates/118872.pem (1708 bytes)
	I1103 20:48:50.735680   98430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1103 20:48:50.755961   98430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/certs/11887.pem --> /usr/share/ca-certificates/11887.pem (1338 bytes)
	I1103 20:48:50.775599   98430 ssh_runner.go:195] Run: openssl version
	I1103 20:48:50.780227   98430 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1103 20:48:50.780436   98430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/118872.pem && ln -fs /usr/share/ca-certificates/118872.pem /etc/ssl/certs/118872.pem"
	I1103 20:48:50.788032   98430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/118872.pem
	I1103 20:48:50.790942   98430 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  3 20:35 /usr/share/ca-certificates/118872.pem
	I1103 20:48:50.790989   98430 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  3 20:35 /usr/share/ca-certificates/118872.pem
	I1103 20:48:50.791032   98430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/118872.pem
	I1103 20:48:50.796981   98430 command_runner.go:130] > 3ec20f2e
	I1103 20:48:50.797041   98430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/118872.pem /etc/ssl/certs/3ec20f2e.0"
	I1103 20:48:50.804531   98430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1103 20:48:50.811922   98430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1103 20:48:50.814761   98430 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  3 20:29 /usr/share/ca-certificates/minikubeCA.pem
	I1103 20:48:50.814790   98430 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  3 20:29 /usr/share/ca-certificates/minikubeCA.pem
	I1103 20:48:50.814825   98430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1103 20:48:50.820401   98430 command_runner.go:130] > b5213941
	I1103 20:48:50.820562   98430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1103 20:48:50.828041   98430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11887.pem && ln -fs /usr/share/ca-certificates/11887.pem /etc/ssl/certs/11887.pem"
	I1103 20:48:50.835843   98430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11887.pem
	I1103 20:48:50.838624   98430 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  3 20:35 /usr/share/ca-certificates/11887.pem
	I1103 20:48:50.838652   98430 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  3 20:35 /usr/share/ca-certificates/11887.pem
	I1103 20:48:50.838681   98430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11887.pem
	I1103 20:48:50.844284   98430 command_runner.go:130] > 51391683
	I1103 20:48:50.844348   98430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11887.pem /etc/ssl/certs/51391683.0"
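The openssl x509 -hash / ln -fs pairs above build the subject-hash symlinks OpenSSL uses to look up CAs in /etc/ssl/certs: each link is named <hash>.0 (with .1, .2, ... for hash collisions). The same scheme by hand, using the minikubeCA file from the log:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	# Verify that OpenSSL can now resolve the CA via the hashed path:
	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem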
	I1103 20:48:50.851790   98430 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1103 20:48:50.854554   98430 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1103 20:48:50.854598   98430 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1103 20:48:50.854675   98430 ssh_runner.go:195] Run: crio config
	I1103 20:48:50.888190   98430 command_runner.go:130] ! time="2023-11-03 20:48:50.887804059Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1103 20:48:50.888228   98430 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1103 20:48:50.892705   98430 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1103 20:48:50.892728   98430 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1103 20:48:50.892738   98430 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1103 20:48:50.892744   98430 command_runner.go:130] > #
	I1103 20:48:50.892754   98430 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1103 20:48:50.892761   98430 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1103 20:48:50.892771   98430 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1103 20:48:50.892782   98430 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1103 20:48:50.892786   98430 command_runner.go:130] > # reload'.
	I1103 20:48:50.892794   98430 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1103 20:48:50.892803   98430 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1103 20:48:50.892813   98430 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1103 20:48:50.892826   98430 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1103 20:48:50.892836   98430 command_runner.go:130] > [crio]
	I1103 20:48:50.892849   98430 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1103 20:48:50.892857   98430 command_runner.go:130] > # containers images, in this directory.
	I1103 20:48:50.892870   98430 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1103 20:48:50.892879   98430 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1103 20:48:50.892885   98430 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1103 20:48:50.892894   98430 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1103 20:48:50.892905   98430 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1103 20:48:50.892917   98430 command_runner.go:130] > # storage_driver = "vfs"
	I1103 20:48:50.892930   98430 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1103 20:48:50.892943   98430 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1103 20:48:50.892950   98430 command_runner.go:130] > # storage_option = [
	I1103 20:48:50.892954   98430 command_runner.go:130] > # ]
	I1103 20:48:50.892963   98430 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1103 20:48:50.892972   98430 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1103 20:48:50.892979   98430 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1103 20:48:50.892988   98430 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1103 20:48:50.893008   98430 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1103 20:48:50.893022   98430 command_runner.go:130] > # always happen on a node reboot
	I1103 20:48:50.893031   98430 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1103 20:48:50.893044   98430 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1103 20:48:50.893054   98430 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1103 20:48:50.893071   98430 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1103 20:48:50.893082   98430 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1103 20:48:50.893099   98430 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1103 20:48:50.893115   98430 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1103 20:48:50.893125   98430 command_runner.go:130] > # internal_wipe = true
	I1103 20:48:50.893137   98430 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1103 20:48:50.893150   98430 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1103 20:48:50.893162   98430 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1103 20:48:50.893174   98430 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1103 20:48:50.893188   98430 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1103 20:48:50.893200   98430 command_runner.go:130] > [crio.api]
	I1103 20:48:50.893209   98430 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1103 20:48:50.893217   98430 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1103 20:48:50.893227   98430 command_runner.go:130] > # IP address on which the stream server will listen.
	I1103 20:48:50.893235   98430 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1103 20:48:50.893245   98430 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1103 20:48:50.893256   98430 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1103 20:48:50.893269   98430 command_runner.go:130] > # stream_port = "0"
	I1103 20:48:50.893278   98430 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1103 20:48:50.893289   98430 command_runner.go:130] > # stream_enable_tls = false
	I1103 20:48:50.893302   98430 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1103 20:48:50.893312   98430 command_runner.go:130] > # stream_idle_timeout = ""
	I1103 20:48:50.893324   98430 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1103 20:48:50.893334   98430 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1103 20:48:50.893343   98430 command_runner.go:130] > # minutes.
	I1103 20:48:50.893357   98430 command_runner.go:130] > # stream_tls_cert = ""
	I1103 20:48:50.893371   98430 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1103 20:48:50.893385   98430 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1103 20:48:50.893395   98430 command_runner.go:130] > # stream_tls_key = ""
	I1103 20:48:50.893409   98430 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1103 20:48:50.893419   98430 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1103 20:48:50.893431   98430 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1103 20:48:50.893442   98430 command_runner.go:130] > # stream_tls_ca = ""
	I1103 20:48:50.893458   98430 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1103 20:48:50.893469   98430 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1103 20:48:50.893484   98430 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1103 20:48:50.893494   98430 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1103 20:48:50.893527   98430 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1103 20:48:50.893541   98430 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1103 20:48:50.893548   98430 command_runner.go:130] > [crio.runtime]
	I1103 20:48:50.893570   98430 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1103 20:48:50.893582   98430 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1103 20:48:50.893589   98430 command_runner.go:130] > # "nofile=1024:2048"
	I1103 20:48:50.893602   98430 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1103 20:48:50.893615   98430 command_runner.go:130] > # default_ulimits = [
	I1103 20:48:50.893625   98430 command_runner.go:130] > # ]
	I1103 20:48:50.893638   98430 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1103 20:48:50.893647   98430 command_runner.go:130] > # no_pivot = false
	I1103 20:48:50.893660   98430 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1103 20:48:50.893672   98430 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1103 20:48:50.893680   98430 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1103 20:48:50.893693   98430 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1103 20:48:50.893705   98430 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1103 20:48:50.893720   98430 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1103 20:48:50.893729   98430 command_runner.go:130] > # conmon = ""
	I1103 20:48:50.893740   98430 command_runner.go:130] > # Cgroup setting for conmon
	I1103 20:48:50.893753   98430 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1103 20:48:50.893761   98430 command_runner.go:130] > conmon_cgroup = "pod"
	I1103 20:48:50.893770   98430 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1103 20:48:50.893785   98430 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1103 20:48:50.893799   98430 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1103 20:48:50.893811   98430 command_runner.go:130] > # conmon_env = [
	I1103 20:48:50.893820   98430 command_runner.go:130] > # ]
	I1103 20:48:50.893830   98430 command_runner.go:130] > # Additional environment variables to set for all the
	I1103 20:48:50.893841   98430 command_runner.go:130] > # containers. These are overridden if set in the
	I1103 20:48:50.893849   98430 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1103 20:48:50.893859   98430 command_runner.go:130] > # default_env = [
	I1103 20:48:50.893868   98430 command_runner.go:130] > # ]
	I1103 20:48:50.893881   98430 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1103 20:48:50.893891   98430 command_runner.go:130] > # selinux = false
	I1103 20:48:50.893905   98430 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1103 20:48:50.893919   98430 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1103 20:48:50.893930   98430 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1103 20:48:50.893937   98430 command_runner.go:130] > # seccomp_profile = ""
	I1103 20:48:50.893946   98430 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1103 20:48:50.893959   98430 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1103 20:48:50.893973   98430 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1103 20:48:50.893983   98430 command_runner.go:130] > # which might increase security.
	I1103 20:48:50.893991   98430 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1103 20:48:50.894010   98430 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1103 20:48:50.894021   98430 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1103 20:48:50.894033   98430 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1103 20:48:50.894048   98430 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1103 20:48:50.894060   98430 command_runner.go:130] > # This option supports live configuration reload.
	I1103 20:48:50.894071   98430 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1103 20:48:50.894083   98430 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1103 20:48:50.894094   98430 command_runner.go:130] > # the cgroup blockio controller.
	I1103 20:48:50.894103   98430 command_runner.go:130] > # blockio_config_file = ""
	I1103 20:48:50.894114   98430 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1103 20:48:50.894125   98430 command_runner.go:130] > # irqbalance daemon.
	I1103 20:48:50.894138   98430 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1103 20:48:50.894152   98430 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1103 20:48:50.894164   98430 command_runner.go:130] > # This option supports live configuration reload.
	I1103 20:48:50.894174   98430 command_runner.go:130] > # rdt_config_file = ""
	I1103 20:48:50.894185   98430 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1103 20:48:50.894204   98430 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1103 20:48:50.894218   98430 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1103 20:48:50.894232   98430 command_runner.go:130] > # separate_pull_cgroup = ""
	I1103 20:48:50.894246   98430 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1103 20:48:50.894259   98430 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1103 20:48:50.894269   98430 command_runner.go:130] > # will be added.
	I1103 20:48:50.894277   98430 command_runner.go:130] > # default_capabilities = [
	I1103 20:48:50.894286   98430 command_runner.go:130] > # 	"CHOWN",
	I1103 20:48:50.894295   98430 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1103 20:48:50.894306   98430 command_runner.go:130] > # 	"FSETID",
	I1103 20:48:50.894316   98430 command_runner.go:130] > # 	"FOWNER",
	I1103 20:48:50.894325   98430 command_runner.go:130] > # 	"SETGID",
	I1103 20:48:50.894334   98430 command_runner.go:130] > # 	"SETUID",
	I1103 20:48:50.894344   98430 command_runner.go:130] > # 	"SETPCAP",
	I1103 20:48:50.894351   98430 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1103 20:48:50.894360   98430 command_runner.go:130] > # 	"KILL",
	I1103 20:48:50.894366   98430 command_runner.go:130] > # ]
	I1103 20:48:50.894378   98430 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1103 20:48:50.894393   98430 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1103 20:48:50.894404   98430 command_runner.go:130] > # add_inheritable_capabilities = true
	I1103 20:48:50.894418   98430 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1103 20:48:50.894431   98430 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1103 20:48:50.894441   98430 command_runner.go:130] > # default_sysctls = [
	I1103 20:48:50.894448   98430 command_runner.go:130] > # ]
	I1103 20:48:50.894453   98430 command_runner.go:130] > # List of devices on the host that a
	I1103 20:48:50.894467   98430 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1103 20:48:50.894478   98430 command_runner.go:130] > # allowed_devices = [
	I1103 20:48:50.894488   98430 command_runner.go:130] > # 	"/dev/fuse",
	I1103 20:48:50.894496   98430 command_runner.go:130] > # ]
	I1103 20:48:50.894507   98430 command_runner.go:130] > # List of additional devices, specified as
	I1103 20:48:50.894545   98430 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1103 20:48:50.894559   98430 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1103 20:48:50.894569   98430 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1103 20:48:50.894580   98430 command_runner.go:130] > # additional_devices = [
	I1103 20:48:50.894589   98430 command_runner.go:130] > # ]
	I1103 20:48:50.894601   98430 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1103 20:48:50.894610   98430 command_runner.go:130] > # cdi_spec_dirs = [
	I1103 20:48:50.894619   98430 command_runner.go:130] > # 	"/etc/cdi",
	I1103 20:48:50.894628   98430 command_runner.go:130] > # 	"/var/run/cdi",
	I1103 20:48:50.894637   98430 command_runner.go:130] > # ]
	I1103 20:48:50.894651   98430 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1103 20:48:50.894664   98430 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1103 20:48:50.894675   98430 command_runner.go:130] > # Defaults to false.
	I1103 20:48:50.894686   98430 command_runner.go:130] > # device_ownership_from_security_context = false
	I1103 20:48:50.894700   98430 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1103 20:48:50.894709   98430 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1103 20:48:50.894717   98430 command_runner.go:130] > # hooks_dir = [
	I1103 20:48:50.894728   98430 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1103 20:48:50.894738   98430 command_runner.go:130] > # ]
	I1103 20:48:50.894752   98430 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1103 20:48:50.894765   98430 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1103 20:48:50.894777   98430 command_runner.go:130] > # its default mounts from the following two files:
	I1103 20:48:50.894785   98430 command_runner.go:130] > #
	I1103 20:48:50.894795   98430 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1103 20:48:50.894806   98430 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1103 20:48:50.894820   98430 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1103 20:48:50.894829   98430 command_runner.go:130] > #
	I1103 20:48:50.894843   98430 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1103 20:48:50.894857   98430 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1103 20:48:50.894870   98430 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1103 20:48:50.894881   98430 command_runner.go:130] > #      only add mounts it finds in this file.
	I1103 20:48:50.894888   98430 command_runner.go:130] > #
	I1103 20:48:50.894899   98430 command_runner.go:130] > # default_mounts_file = ""
	I1103 20:48:50.894912   98430 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1103 20:48:50.894925   98430 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1103 20:48:50.894935   98430 command_runner.go:130] > # pids_limit = 0
	I1103 20:48:50.894949   98430 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1103 20:48:50.894961   98430 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1103 20:48:50.894971   98430 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1103 20:48:50.894987   98430 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1103 20:48:50.895002   98430 command_runner.go:130] > # log_size_max = -1
	I1103 20:48:50.895017   98430 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1103 20:48:50.895027   98430 command_runner.go:130] > # log_to_journald = false
	I1103 20:48:50.895040   98430 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1103 20:48:50.895053   98430 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1103 20:48:50.895064   98430 command_runner.go:130] > # Path to directory for container attach sockets.
	I1103 20:48:50.895076   98430 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1103 20:48:50.895089   98430 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1103 20:48:50.895099   98430 command_runner.go:130] > # bind_mount_prefix = ""
	I1103 20:48:50.895111   98430 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1103 20:48:50.895121   98430 command_runner.go:130] > # read_only = false
	I1103 20:48:50.895134   98430 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1103 20:48:50.895143   98430 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1103 20:48:50.895153   98430 command_runner.go:130] > # live configuration reload.
	I1103 20:48:50.895163   98430 command_runner.go:130] > # log_level = "info"
	I1103 20:48:50.895177   98430 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1103 20:48:50.895188   98430 command_runner.go:130] > # This option supports live configuration reload.
	I1103 20:48:50.895198   98430 command_runner.go:130] > # log_filter = ""
	I1103 20:48:50.895211   98430 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1103 20:48:50.895223   98430 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1103 20:48:50.895230   98430 command_runner.go:130] > # separated by comma.
	I1103 20:48:50.895236   98430 command_runner.go:130] > # uid_mappings = ""
	I1103 20:48:50.895251   98430 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1103 20:48:50.895265   98430 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1103 20:48:50.895274   98430 command_runner.go:130] > # separated by comma.
	I1103 20:48:50.895284   98430 command_runner.go:130] > # gid_mappings = ""
	I1103 20:48:50.895295   98430 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1103 20:48:50.895307   98430 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1103 20:48:50.895319   98430 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1103 20:48:50.895329   98430 command_runner.go:130] > # minimum_mappable_uid = -1
	I1103 20:48:50.895343   98430 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1103 20:48:50.895357   98430 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1103 20:48:50.895370   98430 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1103 20:48:50.895380   98430 command_runner.go:130] > # minimum_mappable_gid = -1
	I1103 20:48:50.895393   98430 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1103 20:48:50.895402   98430 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1103 20:48:50.895414   98430 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1103 20:48:50.895426   98430 command_runner.go:130] > # ctr_stop_timeout = 30
	I1103 20:48:50.895439   98430 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1103 20:48:50.895456   98430 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1103 20:48:50.895468   98430 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1103 20:48:50.895479   98430 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1103 20:48:50.895486   98430 command_runner.go:130] > # drop_infra_ctr = true
	I1103 20:48:50.895495   98430 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1103 20:48:50.895509   98430 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1103 20:48:50.895524   98430 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1103 20:48:50.895534   98430 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1103 20:48:50.895545   98430 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1103 20:48:50.895556   98430 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1103 20:48:50.895566   98430 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1103 20:48:50.895576   98430 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1103 20:48:50.895585   98430 command_runner.go:130] > # pinns_path = ""
	I1103 20:48:50.895599   98430 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1103 20:48:50.895613   98430 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1103 20:48:50.895626   98430 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1103 20:48:50.895637   98430 command_runner.go:130] > # default_runtime = "runc"
	I1103 20:48:50.895649   98430 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1103 20:48:50.895660   98430 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1103 20:48:50.895684   98430 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1103 20:48:50.895697   98430 command_runner.go:130] > # creation as a file is not desired either.
	I1103 20:48:50.895713   98430 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1103 20:48:50.895725   98430 command_runner.go:130] > # the hostname is being managed dynamically.
	I1103 20:48:50.895736   98430 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1103 20:48:50.895743   98430 command_runner.go:130] > # ]
	I1103 20:48:50.895750   98430 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1103 20:48:50.895764   98430 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1103 20:48:50.895778   98430 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1103 20:48:50.895792   98430 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1103 20:48:50.895800   98430 command_runner.go:130] > #
	I1103 20:48:50.895811   98430 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1103 20:48:50.895823   98430 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1103 20:48:50.895831   98430 command_runner.go:130] > #  runtime_type = "oci"
	I1103 20:48:50.895838   98430 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1103 20:48:50.895851   98430 command_runner.go:130] > #  privileged_without_host_devices = false
	I1103 20:48:50.895862   98430 command_runner.go:130] > #  allowed_annotations = []
	I1103 20:48:50.895871   98430 command_runner.go:130] > # Where:
	I1103 20:48:50.895884   98430 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1103 20:48:50.895897   98430 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1103 20:48:50.895910   98430 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1103 20:48:50.895919   98430 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1103 20:48:50.895928   98430 command_runner.go:130] > #   in $PATH.
	I1103 20:48:50.895942   98430 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1103 20:48:50.895954   98430 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1103 20:48:50.895968   98430 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1103 20:48:50.895977   98430 command_runner.go:130] > #   state.
	I1103 20:48:50.895988   98430 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1103 20:48:50.896003   98430 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1103 20:48:50.896014   98430 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1103 20:48:50.896028   98430 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1103 20:48:50.896042   98430 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1103 20:48:50.896056   98430 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1103 20:48:50.896067   98430 command_runner.go:130] > #   The currently recognized values are:
	I1103 20:48:50.896081   98430 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1103 20:48:50.896092   98430 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1103 20:48:50.896106   98430 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1103 20:48:50.896121   98430 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1103 20:48:50.896137   98430 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1103 20:48:50.896150   98430 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1103 20:48:50.896163   98430 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1103 20:48:50.896175   98430 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1103 20:48:50.896184   98430 command_runner.go:130] > #   should be moved to the container's cgroup
	I1103 20:48:50.896194   98430 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1103 20:48:50.896207   98430 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1103 20:48:50.896217   98430 command_runner.go:130] > runtime_type = "oci"
	I1103 20:48:50.896227   98430 command_runner.go:130] > runtime_root = "/run/runc"
	I1103 20:48:50.896234   98430 command_runner.go:130] > runtime_config_path = ""
	I1103 20:48:50.896245   98430 command_runner.go:130] > monitor_path = ""
	I1103 20:48:50.896254   98430 command_runner.go:130] > monitor_cgroup = ""
	I1103 20:48:50.896262   98430 command_runner.go:130] > monitor_exec_cgroup = ""
	I1103 20:48:50.896292   98430 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1103 20:48:50.896303   98430 command_runner.go:130] > # running containers
	I1103 20:48:50.896313   98430 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1103 20:48:50.896324   98430 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1103 20:48:50.896338   98430 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1103 20:48:50.896349   98430 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1103 20:48:50.896358   98430 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1103 20:48:50.896368   98430 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1103 20:48:50.896380   98430 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1103 20:48:50.896391   98430 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1103 20:48:50.896402   98430 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1103 20:48:50.896410   98430 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1103 20:48:50.896445   98430 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1103 20:48:50.896459   98430 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1103 20:48:50.896472   98430 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1103 20:48:50.896488   98430 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1103 20:48:50.896500   98430 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1103 20:48:50.896511   98430 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1103 20:48:50.896531   98430 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1103 20:48:50.896548   98430 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1103 20:48:50.896560   98430 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1103 20:48:50.896576   98430 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1103 20:48:50.896585   98430 command_runner.go:130] > # Example:
	I1103 20:48:50.896590   98430 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1103 20:48:50.896601   98430 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1103 20:48:50.896613   98430 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1103 20:48:50.896625   98430 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1103 20:48:50.896635   98430 command_runner.go:130] > # cpuset = "0-1"
	I1103 20:48:50.896645   98430 command_runner.go:130] > # cpushares = "0"
	I1103 20:48:50.896655   98430 command_runner.go:130] > # Where:
	I1103 20:48:50.896666   98430 command_runner.go:130] > # The workload name is workload-type.
	I1103 20:48:50.896676   98430 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1103 20:48:50.896688   98430 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1103 20:48:50.896702   98430 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1103 20:48:50.896719   98430 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1103 20:48:50.896731   98430 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1103 20:48:50.896740   98430 command_runner.go:130] > # 
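Seen from the pod side, the workload machinery described above amounts to two annotations: the activation key and an optional per-container override. A minimal sketch against the example names configured above (the pod and container names are hypothetical):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: tuned-pod
	  annotations:
	    io.crio/workload: ""                               # activation annotation; key only, value ignored
	    io.crio.workload-type/app: '{"cpushares": "512"}'  # per-container resource override
	spec:
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.9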
	I1103 20:48:50.896753   98430 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1103 20:48:50.896760   98430 command_runner.go:130] > #
	I1103 20:48:50.896768   98430 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1103 20:48:50.896781   98430 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1103 20:48:50.896795   98430 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1103 20:48:50.896809   98430 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1103 20:48:50.896822   98430 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1103 20:48:50.896831   98430 command_runner.go:130] > [crio.image]
	I1103 20:48:50.896842   98430 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1103 20:48:50.896851   98430 command_runner.go:130] > # default_transport = "docker://"
	I1103 20:48:50.896866   98430 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1103 20:48:50.896881   98430 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1103 20:48:50.896891   98430 command_runner.go:130] > # global_auth_file = ""
	I1103 20:48:50.896903   98430 command_runner.go:130] > # The image used to instantiate infra containers.
	I1103 20:48:50.896915   98430 command_runner.go:130] > # This option supports live configuration reload.
	I1103 20:48:50.896925   98430 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1103 20:48:50.896936   98430 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1103 20:48:50.896947   98430 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1103 20:48:50.896959   98430 command_runner.go:130] > # This option supports live configuration reload.
	I1103 20:48:50.896970   98430 command_runner.go:130] > # pause_image_auth_file = ""
	I1103 20:48:50.896984   98430 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1103 20:48:50.896998   98430 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1103 20:48:50.897016   98430 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1103 20:48:50.897025   98430 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1103 20:48:50.897032   98430 command_runner.go:130] > # pause_command = "/pause"
	I1103 20:48:50.897038   98430 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1103 20:48:50.897048   98430 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1103 20:48:50.897061   98430 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1103 20:48:50.897075   98430 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1103 20:48:50.897088   98430 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1103 20:48:50.897098   98430 command_runner.go:130] > # signature_policy = ""
	I1103 20:48:50.897116   98430 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1103 20:48:50.897127   98430 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1103 20:48:50.897134   98430 command_runner.go:130] > # changing them here.
	I1103 20:48:50.897138   98430 command_runner.go:130] > # insecure_registries = [
	I1103 20:48:50.897144   98430 command_runner.go:130] > # ]
	I1103 20:48:50.897151   98430 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1103 20:48:50.897158   98430 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1103 20:48:50.897163   98430 command_runner.go:130] > # image_volumes = "mkdir"
	I1103 20:48:50.897170   98430 command_runner.go:130] > # Temporary directory to use for storing big files
	I1103 20:48:50.897175   98430 command_runner.go:130] > # big_files_temporary_dir = ""
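Since the comments above defer registry configuration to containers-registries.conf(5), here is a minimal sketch of that file for a hypothetical insecure local registry (the location is an assumption for illustration, not a value from this run):

	unqualified-search-registries = ["docker.io"]

	[[registry]]
	location = "myregistry.local:5000"
	insecure = true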
	I1103 20:48:50.897183   98430 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1103 20:48:50.897190   98430 command_runner.go:130] > # CNI plugins.
	I1103 20:48:50.897194   98430 command_runner.go:130] > [crio.network]
	I1103 20:48:50.897207   98430 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1103 20:48:50.897221   98430 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1103 20:48:50.897232   98430 command_runner.go:130] > # cni_default_network = ""
	I1103 20:48:50.897245   98430 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1103 20:48:50.897256   98430 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1103 20:48:50.897268   98430 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1103 20:48:50.897275   98430 command_runner.go:130] > # plugin_dirs = [
	I1103 20:48:50.897279   98430 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1103 20:48:50.897286   98430 command_runner.go:130] > # ]
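For reference, the files CRI-O picks up from network_dir are CNI conflists. A minimal bridge-plugin sketch for a hypothetical network named "mynet" (this run actually uses kindnet, applied further down in the log):

	{
	  "cniVersion": "0.4.0",
	  "name": "mynet",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "cni0",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    }
	  ]
	}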
	I1103 20:48:50.897292   98430 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1103 20:48:50.897299   98430 command_runner.go:130] > [crio.metrics]
	I1103 20:48:50.897304   98430 command_runner.go:130] > # Globally enable or disable metrics support.
	I1103 20:48:50.897311   98430 command_runner.go:130] > # enable_metrics = false
	I1103 20:48:50.897316   98430 command_runner.go:130] > # Specify enabled metrics collectors.
	I1103 20:48:50.897323   98430 command_runner.go:130] > # By default, all metrics are enabled.
	I1103 20:48:50.897329   98430 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1103 20:48:50.897337   98430 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1103 20:48:50.897344   98430 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1103 20:48:50.897351   98430 command_runner.go:130] > # metrics_collectors = [
	I1103 20:48:50.897355   98430 command_runner.go:130] > # 	"operations",
	I1103 20:48:50.897362   98430 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1103 20:48:50.897369   98430 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1103 20:48:50.897374   98430 command_runner.go:130] > # 	"operations_errors",
	I1103 20:48:50.897380   98430 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1103 20:48:50.897384   98430 command_runner.go:130] > # 	"image_pulls_by_name",
	I1103 20:48:50.897391   98430 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1103 20:48:50.897395   98430 command_runner.go:130] > # 	"image_pulls_failures",
	I1103 20:48:50.897402   98430 command_runner.go:130] > # 	"image_pulls_successes",
	I1103 20:48:50.897406   98430 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1103 20:48:50.897413   98430 command_runner.go:130] > # 	"image_layer_reuse",
	I1103 20:48:50.897417   98430 command_runner.go:130] > # 	"containers_oom_total",
	I1103 20:48:50.897426   98430 command_runner.go:130] > # 	"containers_oom",
	I1103 20:48:50.897437   98430 command_runner.go:130] > # 	"processes_defunct",
	I1103 20:48:50.897447   98430 command_runner.go:130] > # 	"operations_total",
	I1103 20:48:50.897455   98430 command_runner.go:130] > # 	"operations_latency_seconds",
	I1103 20:48:50.897459   98430 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1103 20:48:50.897466   98430 command_runner.go:130] > # 	"operations_errors_total",
	I1103 20:48:50.897470   98430 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1103 20:48:50.897477   98430 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1103 20:48:50.897482   98430 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1103 20:48:50.897488   98430 command_runner.go:130] > # 	"image_pulls_success_total",
	I1103 20:48:50.897493   98430 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1103 20:48:50.897499   98430 command_runner.go:130] > # 	"containers_oom_count_total",
	I1103 20:48:50.897503   98430 command_runner.go:130] > # ]
	I1103 20:48:50.897511   98430 command_runner.go:130] > # The port on which the metrics server will listen.
	I1103 20:48:50.897515   98430 command_runner.go:130] > # metrics_port = 9090
	I1103 20:48:50.897522   98430 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1103 20:48:50.897526   98430 command_runner.go:130] > # metrics_socket = ""
	I1103 20:48:50.897533   98430 command_runner.go:130] > # The certificate for the secure metrics server.
	I1103 20:48:50.897541   98430 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1103 20:48:50.897549   98430 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1103 20:48:50.897556   98430 command_runner.go:130] > # certificate on any modification event.
	I1103 20:48:50.897560   98430 command_runner.go:130] > # metrics_cert = ""
	I1103 20:48:50.897568   98430 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1103 20:48:50.897573   98430 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1103 20:48:50.897579   98430 command_runner.go:130] > # metrics_key = ""
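The whole metrics section above is commented out in this run, so no metrics endpoint is served. Enabling it is a matter of uncommenting two keys and scraping the port; a sketch using the documented defaults:

	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090

After a restart of CRI-O, the endpoint can be checked on the node with: curl -s http://127.0.0.1:9090/metrics | head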
	I1103 20:48:50.897585   98430 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1103 20:48:50.897592   98430 command_runner.go:130] > [crio.tracing]
	I1103 20:48:50.897598   98430 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1103 20:48:50.897605   98430 command_runner.go:130] > # enable_tracing = false
	I1103 20:48:50.897610   98430 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1103 20:48:50.897617   98430 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1103 20:48:50.897622   98430 command_runner.go:130] > # Number of samples to collect per million spans.
	I1103 20:48:50.897629   98430 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1103 20:48:50.897635   98430 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1103 20:48:50.897641   98430 command_runner.go:130] > [crio.stats]
	I1103 20:48:50.897647   98430 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1103 20:48:50.897655   98430 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1103 20:48:50.897659   98430 command_runner.go:130] > # stats_collection_period = 0
	I1103 20:48:50.897718   98430 cni.go:84] Creating CNI manager for ""
	I1103 20:48:50.897727   98430 cni.go:136] 2 nodes found, recommending kindnet
	I1103 20:48:50.897735   98430 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1103 20:48:50.897752   98430 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-280480 NodeName:multinode-280480-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1103 20:48:50.897867   98430 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-280480-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1103 20:48:50.897917   98430 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-280480-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-280480 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1103 20:48:50.897965   98430 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1103 20:48:50.905304   98430 command_runner.go:130] > kubeadm
	I1103 20:48:50.905318   98430 command_runner.go:130] > kubectl
	I1103 20:48:50.905324   98430 command_runner.go:130] > kubelet
	I1103 20:48:50.905923   98430 binaries.go:44] Found k8s binaries, skipping transfer
	I1103 20:48:50.905978   98430 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1103 20:48:50.913417   98430 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1103 20:48:50.928177   98430 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1103 20:48:50.942457   98430 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1103 20:48:50.945278   98430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1103 20:48:50.954343   98430 host.go:66] Checking if "multinode-280480" exists ...
	I1103 20:48:50.954649   98430 config.go:182] Loaded profile config "multinode-280480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1103 20:48:50.954615   98430 start.go:304] JoinCluster: &{Name:multinode-280480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-280480 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1103 20:48:50.954711   98430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1103 20:48:50.954756   98430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-280480
	I1103 20:48:50.970366   98430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/multinode-280480/id_rsa Username:docker}
	I1103 20:48:51.106742   98430 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 0h0tlw.3ucx4tthnxz77fhi --discovery-token-ca-cert-hash sha256:1257a42a1bc28f8e43e186124137176ba467e34a8eab3dd89eabd155069822df 
	I1103 20:48:51.106795   98430 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1103 20:48:51.106831   98430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0h0tlw.3ucx4tthnxz77fhi --discovery-token-ca-cert-hash sha256:1257a42a1bc28f8e43e186124137176ba467e34a8eab3dd89eabd155069822df --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-280480-m02"
	I1103 20:48:51.138964   98430 command_runner.go:130] > [preflight] Running pre-flight checks
	I1103 20:48:51.165653   98430 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1103 20:48:51.165681   98430 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1046-gcp
	I1103 20:48:51.165690   98430 command_runner.go:130] > OS: Linux
	I1103 20:48:51.165699   98430 command_runner.go:130] > CGROUPS_CPU: enabled
	I1103 20:48:51.165708   98430 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1103 20:48:51.165719   98430 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1103 20:48:51.165730   98430 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1103 20:48:51.165739   98430 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1103 20:48:51.165756   98430 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1103 20:48:51.165767   98430 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1103 20:48:51.165778   98430 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1103 20:48:51.165789   98430 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1103 20:48:51.244376   98430 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1103 20:48:51.244435   98430 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1103 20:48:51.267797   98430 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1103 20:48:51.267848   98430 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1103 20:48:51.267859   98430 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1103 20:48:51.338703   98430 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1103 20:48:53.851353   98430 command_runner.go:130] > This node has joined the cluster:
	I1103 20:48:53.851381   98430 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1103 20:48:53.851391   98430 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1103 20:48:53.851402   98430 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1103 20:48:53.853841   98430 command_runner.go:130] ! W1103 20:48:51.138651    1111 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1103 20:48:53.853879   98430 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1046-gcp\n", err: exit status 1
	I1103 20:48:53.853895   98430 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1103 20:48:53.853915   98430 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0h0tlw.3ucx4tthnxz77fhi --discovery-token-ca-cert-hash sha256:1257a42a1bc28f8e43e186124137176ba467e34a8eab3dd89eabd155069822df --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-280480-m02": (2.747068898s)
	I1103 20:48:53.853930   98430 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1103 20:48:54.009554   98430 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1103 20:48:54.009598   98430 start.go:306] JoinCluster complete in 3.05498198s
	I1103 20:48:54.009611   98430 cni.go:84] Creating CNI manager for ""
	I1103 20:48:54.009618   98430 cni.go:136] 2 nodes found, recommending kindnet
	I1103 20:48:54.009670   98430 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1103 20:48:54.012914   98430 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1103 20:48:54.012936   98430 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I1103 20:48:54.012945   98430 command_runner.go:130] > Device: 33h/51d	Inode: 544546      Links: 1
	I1103 20:48:54.012954   98430 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1103 20:48:54.012962   98430 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I1103 20:48:54.012971   98430 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I1103 20:48:54.012983   98430 command_runner.go:130] > Change: 2023-11-03 20:29:19.703825044 +0000
	I1103 20:48:54.012996   98430 command_runner.go:130] >  Birth: 2023-11-03 20:29:19.679822742 +0000
	I1103 20:48:54.013045   98430 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1103 20:48:54.013057   98430 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1103 20:48:54.028288   98430 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1103 20:48:54.217126   98430 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1103 20:48:54.220184   98430 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1103 20:48:54.222954   98430 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1103 20:48:54.234510   98430 command_runner.go:130] > daemonset.apps/kindnet configured
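With the kindnet manifest applied (every object above reported unchanged or configured), the rollout can be verified from the control plane with standard kubectl; a sketch, assuming the kubeconfig context matches the profile name and the daemonset pods carry the usual app=kindnet label:

	kubectl --context multinode-280480 -n kube-system rollout status daemonset/kindnet
	kubectl --context multinode-280480 -n kube-system get pods -l app=kindnet -o wide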
	I1103 20:48:54.240045   98430 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17545-5130/kubeconfig
	I1103 20:48:54.240257   98430 kapi.go:59] client config for multinode-280480: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/client.crt", KeyFile:"/home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/client.key", CAFile:"/home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bb20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1103 20:48:54.240629   98430 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1103 20:48:54.240647   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:54.240658   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:54.240670   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:54.242603   98430 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1103 20:48:54.242621   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:54.242632   98430 round_trippers.go:580]     Audit-Id: 517ec441-3144-42d3-b1ee-0008533a2f9b
	I1103 20:48:54.242641   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:54.242650   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:54.242659   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:54.242666   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:54.242672   98430 round_trippers.go:580]     Content-Length: 291
	I1103 20:48:54.242681   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:54 GMT
	I1103 20:48:54.242702   98430 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a22d9e76-d717-469e-a0fe-24082478dbf0","resourceVersion":"407","creationTimestamp":"2023-11-03T20:47:51Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1103 20:48:54.242784   98430 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-280480" context rescaled to 1 replicas
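The rescale above goes through the deployment's Scale subresource directly; the equivalent kubectl invocation would be (context name assumed to match the profile):

	kubectl --context multinode-280480 -n kube-system scale deployment coredns --replicas=1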
	I1103 20:48:54.242817   98430 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1103 20:48:54.244971   98430 out.go:177] * Verifying Kubernetes components...
	I1103 20:48:54.246773   98430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1103 20:48:54.260465   98430 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17545-5130/kubeconfig
	I1103 20:48:54.260793   98430 kapi.go:59] client config for multinode-280480: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/client.crt", KeyFile:"/home/jenkins/minikube-integration/17545-5130/.minikube/profiles/multinode-280480/client.key", CAFile:"/home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bb20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1103 20:48:54.261128   98430 node_ready.go:35] waiting up to 6m0s for node "multinode-280480-m02" to be "Ready" ...
	I1103 20:48:54.261226   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480-m02
	I1103 20:48:54.261238   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:54.261250   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:54.261260   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:54.263703   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:54.263723   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:54.263733   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:54 GMT
	I1103 20:48:54.263742   98430 round_trippers.go:580]     Audit-Id: f094b8a4-b897-41c6-84f2-3fe17afe7461
	I1103 20:48:54.263751   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:54.263759   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:54.263766   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:54.263774   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:54.263921   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480-m02","uid":"b7eca286-cfcc-4dba-b8ad-c96f34ba596b","resourceVersion":"445","creationTimestamp":"2023-11-03T20:48:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:48:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:48:53Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1103 20:48:54.264382   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480-m02
	I1103 20:48:54.264395   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:54.264406   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:54.264413   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:54.266697   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:54.266716   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:54.266725   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:54.266732   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:54.266740   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:54.266752   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:54.266760   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:54 GMT
	I1103 20:48:54.266769   98430 round_trippers.go:580]     Audit-Id: 18062a17-b0fd-452d-8b55-c3ca6b322700
	I1103 20:48:54.267283   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480-m02","uid":"b7eca286-cfcc-4dba-b8ad-c96f34ba596b","resourceVersion":"445","creationTimestamp":"2023-11-03T20:48:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:48:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:48:53Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1103 20:48:54.768166   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480-m02
	I1103 20:48:54.768187   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:54.768195   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:54.768201   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:54.770164   98430 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1103 20:48:54.770186   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:54.770196   98430 round_trippers.go:580]     Audit-Id: 132cca8b-e774-4a7e-999e-94bf17f29d37
	I1103 20:48:54.770205   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:54.770214   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:54.770223   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:54.770233   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:54.770245   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:54 GMT
	I1103 20:48:54.770355   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480-m02","uid":"b7eca286-cfcc-4dba-b8ad-c96f34ba596b","resourceVersion":"445","creationTimestamp":"2023-11-03T20:48:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:48:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:48:53Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1103 20:48:55.268008   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480-m02
	I1103 20:48:55.268028   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:55.268036   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:55.268043   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:55.270171   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:55.270194   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:55.270203   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:55 GMT
	I1103 20:48:55.270212   98430 round_trippers.go:580]     Audit-Id: 6b7e7943-65d0-4fcb-af32-4928c0bf6fe3
	I1103 20:48:55.270223   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:55.270230   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:55.270238   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:55.270247   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:55.270371   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480-m02","uid":"b7eca286-cfcc-4dba-b8ad-c96f34ba596b","resourceVersion":"464","creationTimestamp":"2023-11-03T20:48:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:48:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:48:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5296 chars]
	I1103 20:48:55.270694   98430 node_ready.go:49] node "multinode-280480-m02" has status "Ready":"True"
	I1103 20:48:55.270715   98430 node_ready.go:38] duration metric: took 1.009567063s waiting for node "multinode-280480-m02" to be "Ready" ...
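The polling loop above (repeated GETs of the node object until its Ready condition is True) is what kubectl wait does in one call; an equivalent sketch, with the same context-name assumption as before:

	kubectl --context multinode-280480 wait --for=condition=Ready node/multinode-280480-m02 --timeout=6m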
	I1103 20:48:55.270725   98430 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1103 20:48:55.270783   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1103 20:48:55.270792   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:55.270799   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:55.270806   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:55.273633   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:55.273647   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:55.273654   98430 round_trippers.go:580]     Audit-Id: d99af97b-b06b-4c93-a334-9323a4ca14b5
	I1103 20:48:55.273660   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:55.273668   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:55.273676   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:55.273687   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:55.273695   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:55 GMT
	I1103 20:48:55.274256   98430 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"464"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rxqxb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c6417a12-b154-42c3-ac95-a45396156b0e","resourceVersion":"403","creationTimestamp":"2023-11-03T20:48:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"bfd15fc0-b82f-4ac0-a436-3489d2d0b53c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:48:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bfd15fc0-b82f-4ac0-a436-3489d2d0b53c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68970 chars]
	I1103 20:48:55.276363   98430 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rxqxb" in "kube-system" namespace to be "Ready" ...
	I1103 20:48:55.276448   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rxqxb
	I1103 20:48:55.276457   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:55.276464   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:55.276473   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:55.278159   98430 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1103 20:48:55.278173   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:55.278181   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:55.278190   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:55 GMT
	I1103 20:48:55.278198   98430 round_trippers.go:580]     Audit-Id: 55d9d6f5-174a-40c1-ae27-9496f8e2b652
	I1103 20:48:55.278211   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:55.278222   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:55.278232   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:55.278335   98430 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rxqxb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c6417a12-b154-42c3-ac95-a45396156b0e","resourceVersion":"403","creationTimestamp":"2023-11-03T20:48:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"bfd15fc0-b82f-4ac0-a436-3489d2d0b53c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:48:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bfd15fc0-b82f-4ac0-a436-3489d2d0b53c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1103 20:48:55.278750   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:55.278764   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:55.278770   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:55.278776   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:55.280235   98430 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1103 20:48:55.280249   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:55.280255   98430 round_trippers.go:580]     Audit-Id: 9541e182-4741-4e6f-84a1-cab798bbea4f
	I1103 20:48:55.280260   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:55.280266   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:55.280271   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:55.280279   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:55.280287   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:55 GMT
	I1103 20:48:55.280460   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"387","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 5954 chars]
	I1103 20:48:55.280736   98430 pod_ready.go:92] pod "coredns-5dd5756b68-rxqxb" in "kube-system" namespace has status "Ready":"True"
	I1103 20:48:55.280753   98430 pod_ready.go:81] duration metric: took 4.370629ms waiting for pod "coredns-5dd5756b68-rxqxb" in "kube-system" namespace to be "Ready" ...
	I1103 20:48:55.280763   98430 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-280480" in "kube-system" namespace to be "Ready" ...
	I1103 20:48:55.280808   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-280480
	I1103 20:48:55.280818   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:55.280828   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:55.280839   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:55.282357   98430 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1103 20:48:55.282370   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:55.282376   98430 round_trippers.go:580]     Audit-Id: 199b45fb-e937-4ec7-886d-1dda0aa28dce
	I1103 20:48:55.282384   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:55.282392   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:55.282400   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:55.282413   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:55.282423   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:55 GMT
	I1103 20:48:55.282513   98430 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-280480","namespace":"kube-system","uid":"064baf76-3464-4729-ac2b-cd0fa19b7914","resourceVersion":"279","creationTimestamp":"2023-11-03T20:47:52Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"2934b8bf856873a89bdd628d2cb9fe01","kubernetes.io/config.mirror":"2934b8bf856873a89bdd628d2cb9fe01","kubernetes.io/config.seen":"2023-11-03T20:47:52.011468428Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1103 20:48:55.282811   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:55.282822   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:55.282829   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:55.282835   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:55.284279   98430 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1103 20:48:55.284291   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:55.284297   98430 round_trippers.go:580]     Audit-Id: 9b83277a-a2ff-4cb4-a658-d28f0c7c637c
	I1103 20:48:55.284302   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:55.284307   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:55.284314   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:55.284321   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:55.284334   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:55 GMT
	I1103 20:48:55.284482   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"387","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 5954 chars]
	I1103 20:48:55.284750   98430 pod_ready.go:92] pod "etcd-multinode-280480" in "kube-system" namespace has status "Ready":"True"
	I1103 20:48:55.284765   98430 pod_ready.go:81] duration metric: took 3.994357ms waiting for pod "etcd-multinode-280480" in "kube-system" namespace to be "Ready" ...
	I1103 20:48:55.284781   98430 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-280480" in "kube-system" namespace to be "Ready" ...
	I1103 20:48:55.284828   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-280480
	I1103 20:48:55.284838   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:55.284848   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:55.284859   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:55.286336   98430 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1103 20:48:55.286350   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:55.286356   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:55.286361   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:55.286367   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:55.286375   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:55 GMT
	I1103 20:48:55.286383   98430 round_trippers.go:580]     Audit-Id: 3f4f4793-b149-4b08-8d3a-164875ecf58a
	I1103 20:48:55.286393   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:55.286560   98430 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-280480","namespace":"kube-system","uid":"6f42eff1-84c4-40a2-a107-c04dcc981ab2","resourceVersion":"278","creationTimestamp":"2023-11-03T20:47:52Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"0759901116a5c84d9728f196af5ff715","kubernetes.io/config.mirror":"0759901116a5c84d9728f196af5ff715","kubernetes.io/config.seen":"2023-11-03T20:47:52.011472688Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1103 20:48:55.286927   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:55.286940   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:55.286947   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:55.286953   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:55.288495   98430 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1103 20:48:55.288511   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:55.288520   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:55.288529   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:55 GMT
	I1103 20:48:55.288538   98430 round_trippers.go:580]     Audit-Id: 6077381c-b9bf-4e66-9390-d3f216001400
	I1103 20:48:55.288551   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:55.288564   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:55.288576   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:55.288699   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"387","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 5954 chars]
	I1103 20:48:55.289038   98430 pod_ready.go:92] pod "kube-apiserver-multinode-280480" in "kube-system" namespace has status "Ready":"True"
	I1103 20:48:55.289055   98430 pod_ready.go:81] duration metric: took 4.263268ms waiting for pod "kube-apiserver-multinode-280480" in "kube-system" namespace to be "Ready" ...
	I1103 20:48:55.289066   98430 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-280480" in "kube-system" namespace to be "Ready" ...
	I1103 20:48:55.289125   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-280480
	I1103 20:48:55.289136   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:55.289146   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:55.289159   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:55.290665   98430 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1103 20:48:55.290684   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:55.290693   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:55.290703   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:55.290711   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:55 GMT
	I1103 20:48:55.290723   98430 round_trippers.go:580]     Audit-Id: be65e768-dd84-4578-a3b1-e5ebf563439a
	I1103 20:48:55.290733   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:55.290745   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:55.290879   98430 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-280480","namespace":"kube-system","uid":"04b47790-633d-4d65-8791-33dd357dec71","resourceVersion":"283","creationTimestamp":"2023-11-03T20:47:51Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7a398ee5e88ea41c5429b679b15d57c9","kubernetes.io/config.mirror":"7a398ee5e88ea41c5429b679b15d57c9","kubernetes.io/config.seen":"2023-11-03T20:47:46.573400973Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1103 20:48:55.291221   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:55.291233   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:55.291239   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:55.291245   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:55.292677   98430 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1103 20:48:55.292694   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:55.292704   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:55 GMT
	I1103 20:48:55.292712   98430 round_trippers.go:580]     Audit-Id: 6bbebb7b-d2e9-4c84-a9ac-8424537c7a1a
	I1103 20:48:55.292721   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:55.292730   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:55.292739   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:55.292751   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:55.292868   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"387","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 5954 chars]
	I1103 20:48:55.293208   98430 pod_ready.go:92] pod "kube-controller-manager-multinode-280480" in "kube-system" namespace has status "Ready":"True"
	I1103 20:48:55.293225   98430 pod_ready.go:81] duration metric: took 4.148002ms waiting for pod "kube-controller-manager-multinode-280480" in "kube-system" namespace to be "Ready" ...
	I1103 20:48:55.293235   98430 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d44k5" in "kube-system" namespace to be "Ready" ...
	I1103 20:48:55.468487   98430 request.go:629] Waited for 175.200268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d44k5
	I1103 20:48:55.468578   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d44k5
	I1103 20:48:55.468590   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:55.468602   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:55.468615   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:55.470679   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:55.470703   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:55.470710   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:55 GMT
	I1103 20:48:55.470715   98430 round_trippers.go:580]     Audit-Id: 0ca6051b-3d1a-43d6-80fd-b5874f1e7c4e
	I1103 20:48:55.470725   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:55.470730   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:55.470735   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:55.470740   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:55.470877   98430 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-d44k5","generateName":"kube-proxy-","namespace":"kube-system","uid":"d1b71e80-829a-46a0-b8da-41304c8b61d0","resourceVersion":"460","creationTimestamp":"2023-11-03T20:48:53Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2bb28b36-9e85-4ebb-b884-5447612fba2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:48:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2bb28b36-9e85-4ebb-b884-5447612fba2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1103 20:48:55.668735   98430 request.go:629] Waited for 197.36415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-280480-m02
	I1103 20:48:55.668786   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480-m02
	I1103 20:48:55.668791   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:55.668799   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:55.668815   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:55.670933   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:55.670955   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:55.670965   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:55 GMT
	I1103 20:48:55.670973   98430 round_trippers.go:580]     Audit-Id: f3f92664-01f1-477d-bc66-b83c9d47040c
	I1103 20:48:55.670980   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:55.670988   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:55.671007   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:55.671020   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:55.671137   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480-m02","uid":"b7eca286-cfcc-4dba-b8ad-c96f34ba596b","resourceVersion":"464","creationTimestamp":"2023-11-03T20:48:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:48:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:48:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5296 chars]
	I1103 20:48:55.671448   98430 pod_ready.go:92] pod "kube-proxy-d44k5" in "kube-system" namespace has status "Ready":"True"
	I1103 20:48:55.671464   98430 pod_ready.go:81] duration metric: took 378.218442ms waiting for pod "kube-proxy-d44k5" in "kube-system" namespace to be "Ready" ...
	I1103 20:48:55.671473   98430 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lsfmj" in "kube-system" namespace to be "Ready" ...
	I1103 20:48:55.868906   98430 request.go:629] Waited for 197.374152ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lsfmj
	I1103 20:48:55.868984   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lsfmj
	I1103 20:48:55.868996   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:55.869007   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:55.869019   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:55.872353   98430 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1103 20:48:55.872372   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:55.872390   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:55.872398   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:55.872406   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:55.872415   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:55.872442   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:55 GMT
	I1103 20:48:55.872455   98430 round_trippers.go:580]     Audit-Id: b377fba2-1bde-4391-89bd-1917e3e0d30d
	I1103 20:48:55.872593   98430 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lsfmj","generateName":"kube-proxy-","namespace":"kube-system","uid":"09340714-82ee-4eb4-9884-b262fa594650","resourceVersion":"364","creationTimestamp":"2023-11-03T20:48:04Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2bb28b36-9e85-4ebb-b884-5447612fba2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:48:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2bb28b36-9e85-4ebb-b884-5447612fba2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5509 chars]
	I1103 20:48:56.068339   98430 request.go:629] Waited for 195.343751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:56.068387   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:56.068392   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:56.068399   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:56.068409   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:56.070435   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:56.070451   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:56.070459   98430 round_trippers.go:580]     Audit-Id: 50e48d8e-8e77-4586-b835-d535bfd86980
	I1103 20:48:56.070464   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:56.070470   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:56.070475   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:56.070489   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:56.070497   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:56 GMT
	I1103 20:48:56.070641   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"387","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 5954 chars]
	I1103 20:48:56.070969   98430 pod_ready.go:92] pod "kube-proxy-lsfmj" in "kube-system" namespace has status "Ready":"True"
	I1103 20:48:56.070988   98430 pod_ready.go:81] duration metric: took 399.506164ms waiting for pod "kube-proxy-lsfmj" in "kube-system" namespace to be "Ready" ...
	I1103 20:48:56.071000   98430 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-280480" in "kube-system" namespace to be "Ready" ...
	I1103 20:48:56.268437   98430 request.go:629] Waited for 197.348189ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-280480
	I1103 20:48:56.268510   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-280480
	I1103 20:48:56.268520   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:56.268531   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:56.268545   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:56.270587   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:56.270608   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:56.270620   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:56.270628   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:56.270634   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:56.270639   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:56.270644   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:56 GMT
	I1103 20:48:56.270651   98430 round_trippers.go:580]     Audit-Id: 850ef960-6314-4e66-8286-a8c56df595bc
	I1103 20:48:56.270790   98430 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-280480","namespace":"kube-system","uid":"939a5f81-e4a2-4840-b9a4-e2636be8b7cb","resourceVersion":"287","creationTimestamp":"2023-11-03T20:47:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3d5473c5ead1ca7dc4d20c46b31b7dc2","kubernetes.io/config.mirror":"3d5473c5ead1ca7dc4d20c46b31b7dc2","kubernetes.io/config.seen":"2023-11-03T20:47:46.573393114Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-03T20:47:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1103 20:48:56.468608   98430 request.go:629] Waited for 197.377529ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:56.468662   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-280480
	I1103 20:48:56.468682   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:56.468698   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:56.468707   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:56.470986   98430 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1103 20:48:56.471007   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:56.471017   98430 round_trippers.go:580]     Audit-Id: ffaf7686-d8ed-40bd-bf8a-99b307cf3d30
	I1103 20:48:56.471025   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:56.471034   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:56.471046   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:56.471065   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:56.471074   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:56 GMT
	I1103 20:48:56.471210   98430 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"387","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-03T20:47:49Z","fieldsType":"FieldsV1 [truncated 5954 chars]
	I1103 20:48:56.471625   98430 pod_ready.go:92] pod "kube-scheduler-multinode-280480" in "kube-system" namespace has status "Ready":"True"
	I1103 20:48:56.471643   98430 pod_ready.go:81] duration metric: took 400.634662ms waiting for pod "kube-scheduler-multinode-280480" in "kube-system" namespace to be "Ready" ...
	I1103 20:48:56.471658   98430 pod_ready.go:38] duration metric: took 1.20091801s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1103 20:48:56.471679   98430 system_svc.go:44] waiting for kubelet service to be running ....
	I1103 20:48:56.471729   98430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1103 20:48:56.482248   98430 system_svc.go:56] duration metric: took 10.566811ms WaitForService to wait for kubelet.
	I1103 20:48:56.482268   98430 kubeadm.go:581] duration metric: took 2.239425631s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1103 20:48:56.482289   98430 node_conditions.go:102] verifying NodePressure condition ...
	I1103 20:48:56.668705   98430 request.go:629] Waited for 186.344726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1103 20:48:56.668756   98430 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1103 20:48:56.668763   98430 round_trippers.go:469] Request Headers:
	I1103 20:48:56.668775   98430 round_trippers.go:473]     Accept: application/json, */*
	I1103 20:48:56.668788   98430 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1103 20:48:56.672064   98430 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1103 20:48:56.672086   98430 round_trippers.go:577] Response Headers:
	I1103 20:48:56.672095   98430 round_trippers.go:580]     Date: Fri, 03 Nov 2023 20:48:56 GMT
	I1103 20:48:56.672103   98430 round_trippers.go:580]     Audit-Id: 456a9e94-03b1-4fb4-a54f-39cce9cab823
	I1103 20:48:56.672109   98430 round_trippers.go:580]     Cache-Control: no-cache, private
	I1103 20:48:56.672117   98430 round_trippers.go:580]     Content-Type: application/json
	I1103 20:48:56.672125   98430 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0227f908-e19d-4210-b398-b6c69f39ea1b
	I1103 20:48:56.672134   98430 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0d4d0ace-c452-4eb4-ab6d-6df16f3d267a
	I1103 20:48:56.672371   98430 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"465"},"items":[{"metadata":{"name":"multinode-280480","uid":"0eda9ec0-d571-45ae-838e-8578c65f8ab4","resourceVersion":"387","creationTimestamp":"2023-11-03T20:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-280480","kubernetes.io/os":"linux","minikube.k8s.io/commit":"44765b58c8440feed3c9edc110a2d06dc722956e","minikube.k8s.io/name":"multinode-280480","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_03T20_47_52_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"manage
dFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1"," [truncated 12295 chars]
	I1103 20:48:56.672881   98430 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1103 20:48:56.672899   98430 node_conditions.go:123] node cpu capacity is 8
	I1103 20:48:56.672912   98430 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1103 20:48:56.672918   98430 node_conditions.go:123] node cpu capacity is 8
	I1103 20:48:56.672925   98430 node_conditions.go:105] duration metric: took 190.629735ms to run NodePressure ...
	I1103 20:48:56.672940   98430 start.go:228] waiting for startup goroutines ...
	I1103 20:48:56.672976   98430 start.go:242] writing updated cluster config ...
	I1103 20:48:56.673229   98430 ssh_runner.go:195] Run: rm -f paused
	I1103 20:48:56.716304   98430 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1103 20:48:56.719095   98430 out.go:177] * Done! kubectl is now configured to use "multinode-280480" cluster and "default" namespace by default
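Note on the wait loop above: each pod_ready.go entry is one client-go GET against the apiserver, and the "Waited ... due to client-side throttling" lines come from client-go's default token-bucket limiter (QPS 5, burst 10), not API priority and fairness. Below is a minimal sketch of that kind of readiness poll, assuming a kubeconfig at the default path; the helper name waitPodReady is illustrative, not minikube's actual code.

// Sketch only: re-creating the readiness poll the pod_ready.go lines log,
// using client-go. Kubeconfig path and helper name are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	// Each iteration is one GET of /api/v1/namespaces/{ns}/pods/{name} —
	// the same request the round_trippers lines above record. client-go's
	// default rate limiter (QPS 5, burst 10) is what produces the
	// "client-side throttling" waits when polls queue up.
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "etcd-multinode-280480", 6*time.Minute))
}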
	
	* 
	* ==> CRI-O <==
	* Nov 03 20:48:36 multinode-280480 crio[952]: time="2023-11-03 20:48:36.370554176Z" level=info msg="Starting container: c3e4864574835078a78cad31385033091c173df3f7a2507195fa96edddbda5d5" id=fc8ab995-15cd-4923-b32d-6a6de469ab04 name=/runtime.v1.RuntimeService/StartContainer
	Nov 03 20:48:36 multinode-280480 crio[952]: time="2023-11-03 20:48:36.371225483Z" level=info msg="Created container c8aaf5f08f1d0b762536fa9fc5fbcc100738735181102a1c4ca47d6942b8ba59: kube-system/storage-provisioner/storage-provisioner" id=dcc01303-fb1e-4d9a-a76d-e4e478fb984e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 03 20:48:36 multinode-280480 crio[952]: time="2023-11-03 20:48:36.371655780Z" level=info msg="Starting container: c8aaf5f08f1d0b762536fa9fc5fbcc100738735181102a1c4ca47d6942b8ba59" id=7a32cd89-8cbc-4b61-bf49-d2ce4c77728d name=/runtime.v1.RuntimeService/StartContainer
	Nov 03 20:48:36 multinode-280480 crio[952]: time="2023-11-03 20:48:36.379091211Z" level=info msg="Started container" PID=2348 containerID=c3e4864574835078a78cad31385033091c173df3f7a2507195fa96edddbda5d5 description=kube-system/coredns-5dd5756b68-rxqxb/coredns id=fc8ab995-15cd-4923-b32d-6a6de469ab04 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f28612dc54f1368e0d4eafe8610ecb98d892c90fc88d428ae120a74fc0805194
	Nov 03 20:48:36 multinode-280480 crio[952]: time="2023-11-03 20:48:36.380540965Z" level=info msg="Started container" PID=2349 containerID=c8aaf5f08f1d0b762536fa9fc5fbcc100738735181102a1c4ca47d6942b8ba59 description=kube-system/storage-provisioner/storage-provisioner id=7a32cd89-8cbc-4b61-bf49-d2ce4c77728d name=/runtime.v1.RuntimeService/StartContainer sandboxID=1d566902a547bd6f49a1734a88e5c743cb9da7ced2b58ce95811c537a2dcb4f9
	Nov 03 20:48:57 multinode-280480 crio[952]: time="2023-11-03 20:48:57.686963783Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-z5cz8/POD" id=1527cb08-af7c-48e0-857a-d4dc25e6ed87 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 03 20:48:57 multinode-280480 crio[952]: time="2023-11-03 20:48:57.687053011Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 03 20:48:57 multinode-280480 crio[952]: time="2023-11-03 20:48:57.700522697Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-z5cz8 Namespace:default ID:2cd785fc0954b6c4748da9bec0fa34ee7f3a150a482a71c4435b528f6753fe4b UID:700b8a6c-39f9-464c-84e3-b5a59b4e9900 NetNS:/var/run/netns/5205b5c5-57ea-4326-bcb8-2d0f589aa481 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 03 20:48:57 multinode-280480 crio[952]: time="2023-11-03 20:48:57.700562931Z" level=info msg="Adding pod default_busybox-5bc68d56bd-z5cz8 to CNI network \"kindnet\" (type=ptp)"
	Nov 03 20:48:57 multinode-280480 crio[952]: time="2023-11-03 20:48:57.708857783Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-z5cz8 Namespace:default ID:2cd785fc0954b6c4748da9bec0fa34ee7f3a150a482a71c4435b528f6753fe4b UID:700b8a6c-39f9-464c-84e3-b5a59b4e9900 NetNS:/var/run/netns/5205b5c5-57ea-4326-bcb8-2d0f589aa481 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 03 20:48:57 multinode-280480 crio[952]: time="2023-11-03 20:48:57.708986338Z" level=info msg="Checking pod default_busybox-5bc68d56bd-z5cz8 for CNI network kindnet (type=ptp)"
	Nov 03 20:48:57 multinode-280480 crio[952]: time="2023-11-03 20:48:57.739399865Z" level=info msg="Ran pod sandbox 2cd785fc0954b6c4748da9bec0fa34ee7f3a150a482a71c4435b528f6753fe4b with infra container: default/busybox-5bc68d56bd-z5cz8/POD" id=1527cb08-af7c-48e0-857a-d4dc25e6ed87 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 03 20:48:57 multinode-280480 crio[952]: time="2023-11-03 20:48:57.740521708Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=fd68817f-e875-44d9-8fb3-52aeed672c8f name=/runtime.v1.ImageService/ImageStatus
	Nov 03 20:48:57 multinode-280480 crio[952]: time="2023-11-03 20:48:57.740788365Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=fd68817f-e875-44d9-8fb3-52aeed672c8f name=/runtime.v1.ImageService/ImageStatus
	Nov 03 20:48:57 multinode-280480 crio[952]: time="2023-11-03 20:48:57.741597491Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=49793c9a-884a-4613-94dc-eb7c8dcb15ec name=/runtime.v1.ImageService/PullImage
	Nov 03 20:48:57 multinode-280480 crio[952]: time="2023-11-03 20:48:57.746074073Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Nov 03 20:48:57 multinode-280480 crio[952]: time="2023-11-03 20:48:57.911699123Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Nov 03 20:48:58 multinode-280480 crio[952]: time="2023-11-03 20:48:58.332035937Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=49793c9a-884a-4613-94dc-eb7c8dcb15ec name=/runtime.v1.ImageService/PullImage
	Nov 03 20:48:58 multinode-280480 crio[952]: time="2023-11-03 20:48:58.333008947Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=766512e4-6d15-4481-a1b0-eaf079460879 name=/runtime.v1.ImageService/ImageStatus
	Nov 03 20:48:58 multinode-280480 crio[952]: time="2023-11-03 20:48:58.333579822Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=766512e4-6d15-4481-a1b0-eaf079460879 name=/runtime.v1.ImageService/ImageStatus
	Nov 03 20:48:58 multinode-280480 crio[952]: time="2023-11-03 20:48:58.334422477Z" level=info msg="Creating container: default/busybox-5bc68d56bd-z5cz8/busybox" id=9b50f344-5481-41b2-9df4-f4d8b0384e9a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 03 20:48:58 multinode-280480 crio[952]: time="2023-11-03 20:48:58.334497378Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 03 20:48:58 multinode-280480 crio[952]: time="2023-11-03 20:48:58.405845786Z" level=info msg="Created container 7e17cefe09f5a18231f299f90caf5a9809ad072a657b572652eefeb8254b2631: default/busybox-5bc68d56bd-z5cz8/busybox" id=9b50f344-5481-41b2-9df4-f4d8b0384e9a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 03 20:48:58 multinode-280480 crio[952]: time="2023-11-03 20:48:58.406410727Z" level=info msg="Starting container: 7e17cefe09f5a18231f299f90caf5a9809ad072a657b572652eefeb8254b2631" id=6b15d6be-2a45-49ed-a98f-3aa6603f1f52 name=/runtime.v1.RuntimeService/StartContainer
	Nov 03 20:48:58 multinode-280480 crio[952]: time="2023-11-03 20:48:58.414798237Z" level=info msg="Started container" PID=2519 containerID=7e17cefe09f5a18231f299f90caf5a9809ad072a657b572652eefeb8254b2631 description=default/busybox-5bc68d56bd-z5cz8/busybox id=6b15d6be-2a45-49ed-a98f-3aa6603f1f52 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2cd785fc0954b6c4748da9bec0fa34ee7f3a150a482a71c4435b528f6753fe4b
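Note on the CRI-O log above: the RunPodSandbox → ImageStatus → PullImage → CreateContainer → StartContainer sequence is the standard CRI gRPC flow the kubelet drives. Below is a minimal sketch of the image-side calls against the same socket, using the stock cri-api stubs; it is illustrative only, not the kubelet's code.

// Sketch only: the ImageStatus/PullImage pair CRI-O logs above, issued over
// the crio socket. The socket path matches the cri-socket annotation in the
// describe output further down.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	img := runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28"}
	ic := runtimeapi.NewImageServiceClient(conn)

	// "Checking image status": Image comes back nil when not present locally.
	st, err := ic.ImageStatus(context.TODO(), &runtimeapi.ImageStatusRequest{Image: &img})
	if err != nil {
		panic(err)
	}
	if st.Image == nil {
		// "Pulling image": blocks until the pull completes, returns the digest.
		res, err := ic.PullImage(context.TODO(), &runtimeapi.PullImageRequest{Image: &img})
		if err != nil {
			panic(err)
		}
		fmt.Println("pulled:", res.ImageRef)
	}
}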
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7e17cefe09f5a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   2cd785fc0954b       busybox-5bc68d56bd-z5cz8
	c3e4864574835       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      26 seconds ago       Running             coredns                   0                   f28612dc54f13       coredns-5dd5756b68-rxqxb
	c8aaf5f08f1d0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      26 seconds ago       Running             storage-provisioner       0                   1d566902a547b       storage-provisioner
	75e5ba2b758e4       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      57 seconds ago       Running             kindnet-cni               0                   f0729de71f4d4       kindnet-4khv5
	38c9fa18405ec       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                      57 seconds ago       Running             kube-proxy                0                   5fe63db6cdb4d       kube-proxy-lsfmj
	f3e43b89225be       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                      About a minute ago   Running             kube-apiserver            0                   72efd5b4ca735       kube-apiserver-multinode-280480
	e19f6909ac2af       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                      About a minute ago   Running             kube-controller-manager   0                   8df6007c0acac       kube-controller-manager-multinode-280480
	6dadb4222dcb1       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                      About a minute ago   Running             kube-scheduler            0                   0aee1e8252dfa       kube-scheduler-multinode-280480
	a985c3eb86eef       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   8dc1b041750ca       etcd-multinode-280480
	
	* 
	* ==> coredns [c3e4864574835078a78cad31385033091c173df3f7a2507195fa96edddbda5d5] <==
	* [INFO] 10.244.1.2:41118 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082951s
	[INFO] 10.244.0.3:38722 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104243s
	[INFO] 10.244.0.3:39611 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001540164s
	[INFO] 10.244.0.3:35913 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008788s
	[INFO] 10.244.0.3:56556 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000078238s
	[INFO] 10.244.0.3:52784 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001138939s
	[INFO] 10.244.0.3:36517 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00006468s
	[INFO] 10.244.0.3:51469 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062672s
	[INFO] 10.244.0.3:58972 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068461s
	[INFO] 10.244.1.2:55746 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116s
	[INFO] 10.244.1.2:54631 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091263s
	[INFO] 10.244.1.2:40139 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070762s
	[INFO] 10.244.1.2:52868 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083015s
	[INFO] 10.244.0.3:48098 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096755s
	[INFO] 10.244.0.3:58627 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092633s
	[INFO] 10.244.0.3:55825 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000055126s
	[INFO] 10.244.0.3:55590 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075397s
	[INFO] 10.244.1.2:58932 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108609s
	[INFO] 10.244.1.2:36232 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000121161s
	[INFO] 10.244.1.2:57263 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090607s
	[INFO] 10.244.1.2:41251 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000064709s
	[INFO] 10.244.0.3:37174 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101102s
	[INFO] 10.244.0.3:42101 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094646s
	[INFO] 10.244.0.3:37930 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00006967s
	[INFO] 10.244.0.3:39663 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000053476s
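Note on the coredns log above: the NXDOMAIN/NOERROR pairs trace a pod's resolv.conf search-path expansion — bare "kubernetes.default" fails upstream while the fully-qualified "kubernetes.default.svc.cluster.local" answers. Below is a minimal sketch that replays the lookup against the cluster DNS service; the 10.96.0.10 address is read off the reversed PTR name in the log, and this only works from inside the cluster network.

// Sketch only: querying the cluster DNS service directly, the way a pod's
// resolv.conf points it there.
package main

import (
	"context"
	"fmt"
	"net"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			// Always talk to the cluster DNS service (assumed 10.96.0.10).
			d := net.Dialer{}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	// The fully-qualified name answers NOERROR; the bare "kubernetes.default"
	// is NXDOMAIN upstream, which is why pods rely on search paths.
	addrs, err := r.LookupHost(context.TODO(), "kubernetes.default.svc.cluster.local")
	fmt.Println(addrs, err)
}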
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-280480
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-280480
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=44765b58c8440feed3c9edc110a2d06dc722956e
	                    minikube.k8s.io/name=multinode-280480
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_03T20_47_52_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Nov 2023 20:47:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-280480
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Nov 2023 20:48:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Nov 2023 20:48:35 +0000   Fri, 03 Nov 2023 20:47:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Nov 2023 20:48:35 +0000   Fri, 03 Nov 2023 20:47:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Nov 2023 20:48:35 +0000   Fri, 03 Nov 2023 20:47:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 Nov 2023 20:48:35 +0000   Fri, 03 Nov 2023 20:48:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-280480
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 726c62471a31459fb34e3e8f89079e3e
	  System UUID:                5f2dca95-a061-4a42-ac2e-2d06b99323c5
	  Boot ID:                    399e003d-4e5c-4eac-b4ee-6a616fb3f737
	  Kernel Version:             5.15.0-1046-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-z5cz8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 coredns-5dd5756b68-rxqxb                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     57s
	  kube-system                 etcd-multinode-280480                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         70s
	  kube-system                 kindnet-4khv5                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      58s
	  kube-system                 kube-apiserver-multinode-280480             250m (3%)     0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-multinode-280480    200m (2%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-proxy-lsfmj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-multinode-280480             100m (1%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 57s   kube-proxy       
	  Normal  Starting                 70s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  70s   kubelet          Node multinode-280480 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s   kubelet          Node multinode-280480 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s   kubelet          Node multinode-280480 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           58s   node-controller  Node multinode-280480 event: Registered Node multinode-280480 in Controller
	  Normal  NodeReady                27s   kubelet          Node multinode-280480 status is now: NodeReady
	
	
	Name:               multinode-280480-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-280480-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Nov 2023 20:48:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-280480-m02" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Nov 2023 20:48:55 +0000   Fri, 03 Nov 2023 20:48:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Nov 2023 20:48:55 +0000   Fri, 03 Nov 2023 20:48:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Nov 2023 20:48:55 +0000   Fri, 03 Nov 2023 20:48:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 Nov 2023 20:48:55 +0000   Fri, 03 Nov 2023 20:48:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-280480-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2198da03ab74f3fa354d63b6fa429f3
	  System UUID:                3d98742d-7914-4f15-b0c3-f6600d862c88
	  Boot ID:                    399e003d-4e5c-4eac-b4ee-6a616fb3f737
	  Kernel Version:             5.15.0-1046-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-5rnbm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kindnet-kjd4r               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9s
	  kube-system                 kube-proxy-d44k5            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age               From             Message
	  ----    ------                   ----              ----             -------
	  Normal  Starting                 8s                kube-proxy       
	  Normal  NodeHasSufficientMemory  9s (x5 over 11s)  kubelet          Node multinode-280480-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x5 over 11s)  kubelet          Node multinode-280480-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x5 over 11s)  kubelet          Node multinode-280480-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8s                node-controller  Node multinode-280480-m02 event: Registered Node multinode-280480-m02 in Controller
	  Normal  NodeReady                7s                kubelet          Node multinode-280480-m02 status is now: NodeReady
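Note on the describe output above: the NodePressure verification logged earlier (node_conditions.go) reads the same conditions and capacity fields kubectl renders here. Below is a minimal client-go sketch of that check, assuming a kubeconfig at the default path.

// Sketch only: list nodes and read the pressure conditions and capacity that
// `kubectl describe node` prints, roughly what node_conditions.go verifies.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Matches the "node storage ephemeral capacity / node cpu capacity" lines.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
				fmt.Printf("  %s=%s (%s)\n", c.Type, c.Status, c.Reason)
			}
		}
	}
}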
	
	* 
	* ==> dmesg <==
	* [  +0.004971] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.007941] FS-Cache: N-cookie d=00000000c241a6d9{9p.inode} n=0000000043748617
	[  +0.009108] FS-Cache: N-key=[8] '78a00f0200000000'
	[  +0.307015] FS-Cache: Duplicate cookie detected
	[  +0.004692] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006750] FS-Cache: O-cookie d=00000000c241a6d9{9p.inode} n=00000000a199da0f
	[  +0.007353] FS-Cache: O-key=[8] '82a00f0200000000'
	[  +0.004923] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006579] FS-Cache: N-cookie d=00000000c241a6d9{9p.inode} n=00000000c0333615
	[  +0.008721] FS-Cache: N-key=[8] '82a00f0200000000'
	[Nov 3 20:38] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 3 20:40] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1e 89 f8 b0 17 6d ea 16 18 96 fb d5 08 00
	[  +1.004199] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1e 89 f8 b0 17 6d ea 16 18 96 fb d5 08 00
	[  +2.015806] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1e 89 f8 b0 17 6d ea 16 18 96 fb d5 08 00
	[  +4.159628] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1e 89 f8 b0 17 6d ea 16 18 96 fb d5 08 00
	[  +8.191169] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1e 89 f8 b0 17 6d ea 16 18 96 fb d5 08 00
	[ +16.126457] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1e 89 f8 b0 17 6d ea 16 18 96 fb d5 08 00
	[Nov 3 20:41] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 1e 89 f8 b0 17 6d ea 16 18 96 fb d5 08 00
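The repeated "martian source 10.244.0.5 from 127.0.0.1" entries above mean the kernel saw packets with a loopback source address arrive on eth0, which it logs (and normally drops) unless route_localnet is set on that interface. A minimal diagnostic sketch, assuming shell access to the node (e.g. via minikube ssh):

    # Sysctls behind the "martian source" messages above (assumes shell access to the node)
    sysctl net.ipv4.conf.all.log_martians     # 1 => martian packets are logged to dmesg
    sysctl net.ipv4.conf.eth0.route_localnet  # 0 => loopback-sourced packets on eth0 are treated as martian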
	
	* 
	* ==> etcd [a985c3eb86eef7861a7cfe81da335cf5603b64a15e39b3551d4809182db27493] <==
	* {"level":"info","ts":"2023-11-03T20:47:47.297265Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-03T20:47:47.297669Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-03T20:47:47.297705Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-03T20:47:47.297415Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-11-03T20:47:47.297745Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-11-03T20:47:47.917495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-03T20:47:47.917536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-03T20:47:47.917567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-11-03T20:47:47.917582Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-11-03T20:47:47.917588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-11-03T20:47:47.917596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-11-03T20:47:47.917602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-11-03T20:47:47.91846Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-280480 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-03T20:47:47.918484Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-03T20:47:47.918507Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-03T20:47:47.91861Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-03T20:47:47.918642Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-03T20:47:47.918532Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-03T20:47:47.919174Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-03T20:47:47.919346Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-03T20:47:47.919379Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-03T20:47:47.91986Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-03T20:47:47.920006Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-11-03T20:48:43.127789Z","caller":"traceutil/trace.go:171","msg":"trace[655552228] transaction","detail":"{read_only:false; response_revision:413; number_of_response:1; }","duration":"121.947331ms","start":"2023-11-03T20:48:43.005822Z","end":"2023-11-03T20:48:43.12777Z","steps":["trace[655552228] 'process raft request'  (duration: 121.828293ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-03T20:48:44.625971Z","caller":"traceutil/trace.go:171","msg":"trace[487391257] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"106.889688ms","start":"2023-11-03T20:48:44.519037Z","end":"2023-11-03T20:48:44.625927Z","steps":["trace[487391257] 'process raft request'  (duration: 106.738425ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  20:49:02 up 31 min,  0 users,  load average: 0.81, 1.13, 0.79
	Linux multinode-280480 5.15.0-1046-gcp #54~20.04.1-Ubuntu SMP Wed Oct 25 08:22:15 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [75e5ba2b758e4177de5847d7f558e365054f65ec03dc5a9837cbf6a143cf0864] <==
	* I1103 20:48:05.199085       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1103 20:48:05.199152       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I1103 20:48:05.199287       1 main.go:116] setting mtu 1500 for CNI 
	I1103 20:48:05.199300       1 main.go:146] kindnetd IP family: "ipv4"
	I1103 20:48:05.199322       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1103 20:48:35.525044       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I1103 20:48:35.532712       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1103 20:48:35.532739       1 main.go:227] handling current node
	I1103 20:48:45.547606       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1103 20:48:45.547636       1 main.go:227] handling current node
	I1103 20:48:55.565419       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1103 20:48:55.565609       1 main.go:227] handling current node
	I1103 20:48:55.565662       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1103 20:48:55.565695       1 main.go:250] Node multinode-280480-m02 has CIDR [10.244.1.0/24] 
	I1103 20:48:55.565922       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
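The "Adding route" line above is kindnet installing a static route to the second node's pod CIDR via that node's IP, which is what carries cross-node pod traffic. A quick verification sketch, assuming shell access to the primary node:

    # Verify the inter-node pod route kindnet added (values from the log line above)
    ip route show 10.244.1.0/24
    # expected output along the lines of: 10.244.1.0/24 via 192.168.58.3 ...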
	
	* 
	* ==> kube-apiserver [f3e43b89225befed808ca71f907255c3d98b0b5912c83e9bd090ca94324be5eb] <==
	* I1103 20:47:49.289025       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1103 20:47:49.289170       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1103 20:47:49.289230       1 shared_informer.go:318] Caches are synced for configmaps
	I1103 20:47:49.289331       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1103 20:47:49.289443       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1103 20:47:49.289528       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1103 20:47:49.292172       1 controller.go:624] quota admission added evaluator for: namespaces
	E1103 20:47:49.300451       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1103 20:47:49.505333       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1103 20:47:50.146388       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1103 20:47:50.149640       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1103 20:47:50.149658       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1103 20:47:50.518446       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1103 20:47:50.549367       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1103 20:47:50.606849       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1103 20:47:50.612177       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1103 20:47:50.613207       1 controller.go:624] quota admission added evaluator for: endpoints
	I1103 20:47:50.617034       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1103 20:47:51.306266       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1103 20:47:51.958315       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1103 20:47:51.967133       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1103 20:47:51.975896       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1103 20:48:04.703755       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1103 20:48:05.063220       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [e19f6909ac2af2aa5891bba6f524a9a64422166e0545dd5a1965cc82baaa9b2a] <==
	* I1103 20:48:35.965077       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="124.455µs"
	I1103 20:48:35.973996       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="138.126µs"
	I1103 20:48:37.191356       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.784821ms"
	I1103 20:48:37.191471       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.945µs"
	I1103 20:48:39.710039       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1103 20:48:53.381205       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-280480-m02\" does not exist"
	I1103 20:48:53.388287       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-280480-m02" podCIDRs=["10.244.1.0/24"]
	I1103 20:48:53.394193       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-kjd4r"
	I1103 20:48:53.394220       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-d44k5"
	I1103 20:48:54.712165       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-280480-m02"
	I1103 20:48:54.712171       1 event.go:307] "Event occurred" object="multinode-280480-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-280480-m02 event: Registered Node multinode-280480-m02 in Controller"
	I1103 20:48:55.026290       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-280480-m02"
	I1103 20:48:57.366703       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1103 20:48:57.374029       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-5rnbm"
	I1103 20:48:57.378193       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-z5cz8"
	I1103 20:48:57.385147       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="18.487279ms"
	I1103 20:48:57.390449       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.193523ms"
	I1103 20:48:57.390515       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="36.938µs"
	I1103 20:48:57.392617       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="72.866µs"
	I1103 20:48:57.396305       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="36.165µs"
	I1103 20:48:58.918222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.596305ms"
	I1103 20:48:58.918293       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="37.994µs"
	I1103 20:48:59.219799       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.843502ms"
	I1103 20:48:59.219900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="50.043µs"
	I1103 20:48:59.721572       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-5rnbm" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-5rnbm"
	
	* 
	* ==> kube-proxy [38c9fa18405ecb00a542225dd88747ca654937a71c5ecb1893f63cf387ddb63d] <==
	* I1103 20:48:05.301047       1 server_others.go:69] "Using iptables proxy"
	I1103 20:48:05.392288       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1103 20:48:05.423739       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1103 20:48:05.426051       1 server_others.go:152] "Using iptables Proxier"
	I1103 20:48:05.426097       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1103 20:48:05.426109       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1103 20:48:05.426145       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1103 20:48:05.426406       1 server.go:846] "Version info" version="v1.28.3"
	I1103 20:48:05.426423       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1103 20:48:05.427345       1 config.go:188] "Starting service config controller"
	I1103 20:48:05.427373       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1103 20:48:05.427398       1 config.go:97] "Starting endpoint slice config controller"
	I1103 20:48:05.427402       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1103 20:48:05.427998       1 config.go:315] "Starting node config controller"
	I1103 20:48:05.428006       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1103 20:48:05.527987       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1103 20:48:05.528012       1 shared_informer.go:318] Caches are synced for service config
	I1103 20:48:05.528065       1 shared_informer.go:318] Caches are synced for node config
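Per the proxier.go line above, kube-proxy sets route_localnet=1 so NodePort services answer on 127.0.0.1, and it names the knobs for turning that off. A hedged way to confirm the setting took effect:

    # Confirm the setting from the proxier.go line above (assumes shell access to the node)
    sysctl net.ipv4.conf.all.route_localnet   # expected 1 once kube-proxy has started
    # per the same log line, --iptables-localhost-nodeports=false (or a nodePortAddresses
    # filter that excludes loopback) disables this behaviour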
	
	* 
	* ==> kube-scheduler [6dadb4222dcb187de2e89f7937c250f7f183ad17f71c5033b25c1d459aec009b] <==
	* W1103 20:47:49.390034       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1103 20:47:49.390040       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1103 20:47:49.390062       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1103 20:47:49.390075       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1103 20:47:49.389880       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1103 20:47:49.390091       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1103 20:47:49.389787       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1103 20:47:49.390107       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1103 20:47:49.390136       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1103 20:47:49.390136       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1103 20:47:49.389759       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1103 20:47:49.390162       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1103 20:47:49.389856       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1103 20:47:49.390183       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1103 20:47:49.389885       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1103 20:47:49.390207       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1103 20:47:49.389897       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1103 20:47:49.390223       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1103 20:47:49.390038       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1103 20:47:49.390235       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1103 20:47:50.270328       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1103 20:47:50.270356       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1103 20:47:50.526855       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1103 20:47:50.526882       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1103 20:47:53.315587       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Nov 03 20:48:04 multinode-280480 kubelet[1583]: I1103 20:48:04.799313    1583 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09340714-82ee-4eb4-9884-b262fa594650-lib-modules\") pod \"kube-proxy-lsfmj\" (UID: \"09340714-82ee-4eb4-9884-b262fa594650\") " pod="kube-system/kube-proxy-lsfmj"
	Nov 03 20:48:04 multinode-280480 kubelet[1583]: I1103 20:48:04.799345    1583 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmthv\" (UniqueName: \"kubernetes.io/projected/09340714-82ee-4eb4-9884-b262fa594650-kube-api-access-gmthv\") pod \"kube-proxy-lsfmj\" (UID: \"09340714-82ee-4eb4-9884-b262fa594650\") " pod="kube-system/kube-proxy-lsfmj"
	Nov 03 20:48:04 multinode-280480 kubelet[1583]: I1103 20:48:04.799433    1583 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/275c32e9-1923-43d6-8f29-fb7afd49891f-cni-cfg\") pod \"kindnet-4khv5\" (UID: \"275c32e9-1923-43d6-8f29-fb7afd49891f\") " pod="kube-system/kindnet-4khv5"
	Nov 03 20:48:04 multinode-280480 kubelet[1583]: I1103 20:48:04.799507    1583 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09340714-82ee-4eb4-9884-b262fa594650-xtables-lock\") pod \"kube-proxy-lsfmj\" (UID: \"09340714-82ee-4eb4-9884-b262fa594650\") " pod="kube-system/kube-proxy-lsfmj"
	Nov 03 20:48:04 multinode-280480 kubelet[1583]: I1103 20:48:04.893369    1583 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 03 20:48:04 multinode-280480 kubelet[1583]: I1103 20:48:04.894368    1583 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 03 20:48:05 multinode-280480 kubelet[1583]: W1103 20:48:05.058292    1583 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/6561f5214f3b17505a43ff57c40d46ca1f1dcdf0e2d6bd8538c6a73879314ab8/crio-f0729de71f4d4a9c099f2f2d5687f3eaa3d92ed922263e0e6b5f674e5f95558a WatchSource:0}: Error finding container f0729de71f4d4a9c099f2f2d5687f3eaa3d92ed922263e0e6b5f674e5f95558a: Status 404 returned error can't find the container with id f0729de71f4d4a9c099f2f2d5687f3eaa3d92ed922263e0e6b5f674e5f95558a
	Nov 03 20:48:05 multinode-280480 kubelet[1583]: W1103 20:48:05.058565    1583 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/6561f5214f3b17505a43ff57c40d46ca1f1dcdf0e2d6bd8538c6a73879314ab8/crio-5fe63db6cdb4dd871991a015e4ea330b232f6eebac42f73a6ffc34225ef71f48 WatchSource:0}: Error finding container 5fe63db6cdb4dd871991a015e4ea330b232f6eebac42f73a6ffc34225ef71f48: Status 404 returned error can't find the container with id 5fe63db6cdb4dd871991a015e4ea330b232f6eebac42f73a6ffc34225ef71f48
	Nov 03 20:48:06 multinode-280480 kubelet[1583]: I1103 20:48:06.127843    1583 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lsfmj" podStartSLOduration=2.127805031 podCreationTimestamp="2023-11-03 20:48:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-03 20:48:06.127427449 +0000 UTC m=+14.192437803" watchObservedRunningTime="2023-11-03 20:48:06.127805031 +0000 UTC m=+14.192815174"
	Nov 03 20:48:35 multinode-280480 kubelet[1583]: I1103 20:48:35.946027    1583 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 03 20:48:35 multinode-280480 kubelet[1583]: I1103 20:48:35.964866    1583 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-4khv5" podStartSLOduration=31.964794775 podCreationTimestamp="2023-11-03 20:48:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-03 20:48:06.136234269 +0000 UTC m=+14.201244413" watchObservedRunningTime="2023-11-03 20:48:35.964794775 +0000 UTC m=+44.029804916"
	Nov 03 20:48:35 multinode-280480 kubelet[1583]: I1103 20:48:35.965363    1583 topology_manager.go:215] "Topology Admit Handler" podUID="c6417a12-b154-42c3-ac95-a45396156b0e" podNamespace="kube-system" podName="coredns-5dd5756b68-rxqxb"
	Nov 03 20:48:35 multinode-280480 kubelet[1583]: I1103 20:48:35.966884    1583 topology_manager.go:215] "Topology Admit Handler" podUID="1874c901-a5b0-41a8-922c-94cb29090e3e" podNamespace="kube-system" podName="storage-provisioner"
	Nov 03 20:48:36 multinode-280480 kubelet[1583]: I1103 20:48:36.017075    1583 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6417a12-b154-42c3-ac95-a45396156b0e-config-volume\") pod \"coredns-5dd5756b68-rxqxb\" (UID: \"c6417a12-b154-42c3-ac95-a45396156b0e\") " pod="kube-system/coredns-5dd5756b68-rxqxb"
	Nov 03 20:48:36 multinode-280480 kubelet[1583]: I1103 20:48:36.017127    1583 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97kxp\" (UniqueName: \"kubernetes.io/projected/c6417a12-b154-42c3-ac95-a45396156b0e-kube-api-access-97kxp\") pod \"coredns-5dd5756b68-rxqxb\" (UID: \"c6417a12-b154-42c3-ac95-a45396156b0e\") " pod="kube-system/coredns-5dd5756b68-rxqxb"
	Nov 03 20:48:36 multinode-280480 kubelet[1583]: I1103 20:48:36.017163    1583 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1874c901-a5b0-41a8-922c-94cb29090e3e-tmp\") pod \"storage-provisioner\" (UID: \"1874c901-a5b0-41a8-922c-94cb29090e3e\") " pod="kube-system/storage-provisioner"
	Nov 03 20:48:36 multinode-280480 kubelet[1583]: I1103 20:48:36.017232    1583 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lcd2\" (UniqueName: \"kubernetes.io/projected/1874c901-a5b0-41a8-922c-94cb29090e3e-kube-api-access-8lcd2\") pod \"storage-provisioner\" (UID: \"1874c901-a5b0-41a8-922c-94cb29090e3e\") " pod="kube-system/storage-provisioner"
	Nov 03 20:48:36 multinode-280480 kubelet[1583]: W1103 20:48:36.309195    1583 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/6561f5214f3b17505a43ff57c40d46ca1f1dcdf0e2d6bd8538c6a73879314ab8/crio-1d566902a547bd6f49a1734a88e5c743cb9da7ced2b58ce95811c537a2dcb4f9 WatchSource:0}: Error finding container 1d566902a547bd6f49a1734a88e5c743cb9da7ced2b58ce95811c537a2dcb4f9: Status 404 returned error can't find the container with id 1d566902a547bd6f49a1734a88e5c743cb9da7ced2b58ce95811c537a2dcb4f9
	Nov 03 20:48:36 multinode-280480 kubelet[1583]: W1103 20:48:36.309496    1583 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/6561f5214f3b17505a43ff57c40d46ca1f1dcdf0e2d6bd8538c6a73879314ab8/crio-f28612dc54f1368e0d4eafe8610ecb98d892c90fc88d428ae120a74fc0805194 WatchSource:0}: Error finding container f28612dc54f1368e0d4eafe8610ecb98d892c90fc88d428ae120a74fc0805194: Status 404 returned error can't find the container with id f28612dc54f1368e0d4eafe8610ecb98d892c90fc88d428ae120a74fc0805194
	Nov 03 20:48:37 multinode-280480 kubelet[1583]: I1103 20:48:37.184498    1583 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-rxqxb" podStartSLOduration=32.184453492 podCreationTimestamp="2023-11-03 20:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-03 20:48:37.184369934 +0000 UTC m=+45.249380077" watchObservedRunningTime="2023-11-03 20:48:37.184453492 +0000 UTC m=+45.249463640"
	Nov 03 20:48:37 multinode-280480 kubelet[1583]: I1103 20:48:37.184602    1583 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.18457941 podCreationTimestamp="2023-11-03 20:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-03 20:48:37.175951482 +0000 UTC m=+45.240961638" watchObservedRunningTime="2023-11-03 20:48:37.18457941 +0000 UTC m=+45.249589556"
	Nov 03 20:48:57 multinode-280480 kubelet[1583]: I1103 20:48:57.385294    1583 topology_manager.go:215] "Topology Admit Handler" podUID="700b8a6c-39f9-464c-84e3-b5a59b4e9900" podNamespace="default" podName="busybox-5bc68d56bd-z5cz8"
	Nov 03 20:48:57 multinode-280480 kubelet[1583]: I1103 20:48:57.444802    1583 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r7jd\" (UniqueName: \"kubernetes.io/projected/700b8a6c-39f9-464c-84e3-b5a59b4e9900-kube-api-access-7r7jd\") pod \"busybox-5bc68d56bd-z5cz8\" (UID: \"700b8a6c-39f9-464c-84e3-b5a59b4e9900\") " pod="default/busybox-5bc68d56bd-z5cz8"
	Nov 03 20:48:57 multinode-280480 kubelet[1583]: W1103 20:48:57.737285    1583 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/6561f5214f3b17505a43ff57c40d46ca1f1dcdf0e2d6bd8538c6a73879314ab8/crio-2cd785fc0954b6c4748da9bec0fa34ee7f3a150a482a71c4435b528f6753fe4b WatchSource:0}: Error finding container 2cd785fc0954b6c4748da9bec0fa34ee7f3a150a482a71c4435b528f6753fe4b: Status 404 returned error can't find the container with id 2cd785fc0954b6c4748da9bec0fa34ee7f3a150a482a71c4435b528f6753fe4b
	Nov 03 20:48:59 multinode-280480 kubelet[1583]: I1103 20:48:59.215158    1583 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-z5cz8" podStartSLOduration=1.623542501 podCreationTimestamp="2023-11-03 20:48:57 +0000 UTC" firstStartedPulling="2023-11-03 20:48:57.740977717 +0000 UTC m=+65.805987852" lastFinishedPulling="2023-11-03 20:48:58.332545123 +0000 UTC m=+66.397555256" observedRunningTime="2023-11-03 20:48:59.21471238 +0000 UTC m=+67.279722522" watchObservedRunningTime="2023-11-03 20:48:59.215109905 +0000 UTC m=+67.280120052"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-280480 -n multinode-280480
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-280480 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.08s)
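For manual triage, the failing check can be approximated by resolving and pinging the host from inside one of the busybox pods listed in the node table above; host.minikube.internal as the alias the test resolves is an assumption about this minikube version:

    # Hedged reproduction sketch (pod/context names from this report; host alias assumed)
    kubectl --context multinode-280480 exec busybox-5bc68d56bd-5rnbm -- nslookup host.minikube.internal
    kubectl --context multinode-280480 exec busybox-5bc68d56bd-5rnbm -- ping -c 1 <resolved-host-ip>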

                                                
                                    
x
+
TestRunningBinaryUpgrade (62s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.9.0.885354891.exe start -p running-upgrade-541401 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.9.0.885354891.exe start -p running-upgrade-541401 --memory=2200 --vm-driver=docker  --container-runtime=crio: (56.220300994s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-541401 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-541401 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (2.165929173s)

                                                
                                                
-- stdout --
	* [running-upgrade-541401] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17545
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17545-5130/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17545-5130/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-541401 in cluster running-upgrade-541401
	* Pulling base image ...
	* Updating the running docker "running-upgrade-541401" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1103 21:00:40.922917  184874 out.go:296] Setting OutFile to fd 1 ...
	I1103 21:00:40.923166  184874 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1103 21:00:40.923207  184874 out.go:309] Setting ErrFile to fd 2...
	I1103 21:00:40.923226  184874 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1103 21:00:40.923575  184874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17545-5130/.minikube/bin
	I1103 21:00:40.924341  184874 out.go:303] Setting JSON to false
	I1103 21:00:40.925695  184874 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2591,"bootTime":1699042650,"procs":455,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1103 21:00:40.925778  184874 start.go:138] virtualization: kvm guest
	I1103 21:00:40.928316  184874 out.go:177] * [running-upgrade-541401] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1103 21:00:40.930394  184874 out.go:177]   - MINIKUBE_LOCATION=17545
	I1103 21:00:40.932069  184874 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1103 21:00:40.930471  184874 notify.go:220] Checking for updates...
	I1103 21:00:40.935021  184874 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17545-5130/kubeconfig
	I1103 21:00:40.936496  184874 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17545-5130/.minikube
	I1103 21:00:40.937889  184874 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1103 21:00:40.939330  184874 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1103 21:00:40.941074  184874 config.go:182] Loaded profile config "running-upgrade-541401": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1103 21:00:40.941098  184874 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89
	I1103 21:00:40.943033  184874 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1103 21:00:40.944392  184874 driver.go:378] Setting default libvirt URI to qemu:///system
	I1103 21:00:40.971652  184874 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1103 21:00:40.971742  184874 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1103 21:00:41.030278  184874 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:76 OomKillDisable:true NGoroutines:80 SystemTime:2023-11-03 21:00:41.02091435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1103 21:00:41.030445  184874 docker.go:295] overlay module found
	I1103 21:00:41.032726  184874 out.go:177] * Using the docker driver based on existing profile
	I1103 21:00:41.034603  184874 start.go:298] selected driver: docker
	I1103 21:00:41.034616  184874 start.go:902] validating driver "docker" against &{Name:running-upgrade-541401 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-541401 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1103 21:00:41.034709  184874 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1103 21:00:41.035577  184874 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1103 21:00:41.085542  184874 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:76 OomKillDisable:true NGoroutines:80 SystemTime:2023-11-03 21:00:41.077548246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1103 21:00:41.085875  184874 cni.go:84] Creating CNI manager for ""
	I1103 21:00:41.085904  184874 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1103 21:00:41.085920  184874 start_flags.go:323] config:
	{Name:running-upgrade-541401 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-541401 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1103 21:00:41.088100  184874 out.go:177] * Starting control plane node running-upgrade-541401 in cluster running-upgrade-541401
	I1103 21:00:41.089499  184874 cache.go:121] Beginning downloading kic base image for docker with crio
	I1103 21:00:41.090988  184874 out.go:177] * Pulling base image ...
	I1103 21:00:41.092491  184874 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1103 21:00:41.092586  184874 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 in local docker daemon
	I1103 21:00:41.108213  184874 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 in local docker daemon, skipping pull
	I1103 21:00:41.108239  184874 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 exists in daemon, skipping load
	W1103 21:00:41.126569  184874 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
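	The 404 above is expected rather than fatal: no preload tarball is published for Kubernetes v1.18.0 with cri-o, so minikube falls back to the per-image cache (the cache.go "exists ... succeeded" lines that follow). The missing object can be confirmed directly with the URL from the warning:

    # Confirm the missing preload the warning above refers to (URL copied from the log)
    curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 | head -n 1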
	I1103 21:00:41.126710  184874 profile.go:148] Saving config to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/running-upgrade-541401/config.json ...
	I1103 21:00:41.126794  184874 cache.go:107] acquiring lock: {Name:mk73a26abc65b338f9fca6ccee09ab6c3db8eb69 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1103 21:00:41.126796  184874 cache.go:107] acquiring lock: {Name:mk67a29ab3127d32aab942bb5f77c1cad94ad4ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1103 21:00:41.126865  184874 cache.go:107] acquiring lock: {Name:mk05e8afc476f21b75f4dcc4e03bcd91934021b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1103 21:00:41.126921  184874 cache.go:115] /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1103 21:00:41.126936  184874 cache.go:115] /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1103 21:00:41.126938  184874 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 152.294µs
	I1103 21:00:41.126951  184874 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 161.25µs
	I1103 21:00:41.126966  184874 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1103 21:00:41.126804  184874 cache.go:107] acquiring lock: {Name:mkd1abaf61e4b796b3cd3c0c019e209859ba5dd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1103 21:00:41.126971  184874 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1103 21:00:41.126996  184874 cache.go:194] Successfully downloaded all kic artifacts
	I1103 21:00:41.126983  184874 cache.go:107] acquiring lock: {Name:mk722662d6cc7985b68bf535564390d868a5cb12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1103 21:00:41.127009  184874 cache.go:115] /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1103 21:00:41.127015  184874 cache.go:115] /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1103 21:00:41.127021  184874 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 157.526µs
	I1103 21:00:41.127024  184874 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 224.352µs
	I1103 21:00:41.127023  184874 start.go:365] acquiring machines lock for running-upgrade-541401: {Name:mkd629d3f889eeed88ed73306c2676e96a59f507 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1103 21:00:41.127031  184874 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1103 21:00:41.127033  184874 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1103 21:00:41.127019  184874 cache.go:107] acquiring lock: {Name:mk0656e5d10ad2a2ff852b5a6bfb1813b21068a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1103 21:00:41.127071  184874 cache.go:115] /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1103 21:00:41.127081  184874 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 147.994µs
	I1103 21:00:41.127021  184874 cache.go:107] acquiring lock: {Name:mkeda5447808b388c79d63621b3d49ece39dc20e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1103 21:00:41.127104  184874 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1103 21:00:41.127112  184874 start.go:369] acquired machines lock for "running-upgrade-541401" in 74.25µs
	I1103 21:00:41.127127  184874 cache.go:115] /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1103 21:00:41.127134  184874 start.go:96] Skipping create...Using existing machine configuration
	I1103 21:00:41.127148  184874 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 163.157µs
	I1103 21:00:41.127159  184874 fix.go:54] fixHost starting: m01
	I1103 21:00:41.127165  184874 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1103 21:00:41.127201  184874 cache.go:115] /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1103 21:00:41.127219  184874 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 236.204µs
	I1103 21:00:41.127240  184874 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1103 21:00:41.127260  184874 cache.go:107] acquiring lock: {Name:mk03f11442ecb566999f9f5d7f999672931ccfaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1103 21:00:41.127366  184874 cache.go:115] /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1103 21:00:41.127382  184874 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 162.021µs
	I1103 21:00:41.127398  184874 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1103 21:00:41.127409  184874 cache.go:87] Successfully saved all images to host disk.
	I1103 21:00:41.127453  184874 cli_runner.go:164] Run: docker container inspect running-upgrade-541401 --format={{.State.Status}}
	I1103 21:00:41.146352  184874 fix.go:102] recreateIfNeeded on running-upgrade-541401: state=Running err=<nil>
	W1103 21:00:41.146387  184874 fix.go:128] unexpected machine state, will restart: <nil>
	I1103 21:00:41.148724  184874 out.go:177] * Updating the running docker "running-upgrade-541401" container ...
	I1103 21:00:41.150244  184874 machine.go:88] provisioning docker machine ...
	I1103 21:00:41.150312  184874 ubuntu.go:169] provisioning hostname "running-upgrade-541401"
	I1103 21:00:41.150388  184874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-541401
	I1103 21:00:41.168593  184874 main.go:141] libmachine: Using SSH client type: native
	I1103 21:00:41.168985  184874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32961 <nil> <nil>}
	I1103 21:00:41.169011  184874 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-541401 && echo "running-upgrade-541401" | sudo tee /etc/hostname
	I1103 21:00:41.284209  184874 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-541401
	
	I1103 21:00:41.284307  184874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-541401
	I1103 21:00:41.300776  184874 main.go:141] libmachine: Using SSH client type: native
	I1103 21:00:41.301337  184874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32961 <nil> <nil>}
	I1103 21:00:41.301370  184874 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-541401' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-541401/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-541401' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1103 21:00:41.404243  184874 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1103 21:00:41.404272  184874 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17545-5130/.minikube CaCertPath:/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17545-5130/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17545-5130/.minikube}
	I1103 21:00:41.404314  184874 ubuntu.go:177] setting up certificates
	I1103 21:00:41.404324  184874 provision.go:83] configureAuth start
	I1103 21:00:41.404372  184874 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-541401
	I1103 21:00:41.420893  184874 provision.go:138] copyHostCerts
	I1103 21:00:41.420958  184874 exec_runner.go:144] found /home/jenkins/minikube-integration/17545-5130/.minikube/ca.pem, removing ...
	I1103 21:00:41.420965  184874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.pem
	I1103 21:00:41.421035  184874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17545-5130/.minikube/ca.pem (1082 bytes)
	I1103 21:00:41.421126  184874 exec_runner.go:144] found /home/jenkins/minikube-integration/17545-5130/.minikube/cert.pem, removing ...
	I1103 21:00:41.421136  184874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17545-5130/.minikube/cert.pem
	I1103 21:00:41.421161  184874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17545-5130/.minikube/cert.pem (1123 bytes)
	I1103 21:00:41.421211  184874 exec_runner.go:144] found /home/jenkins/minikube-integration/17545-5130/.minikube/key.pem, removing ...
	I1103 21:00:41.421223  184874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17545-5130/.minikube/key.pem
	I1103 21:00:41.421251  184874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17545-5130/.minikube/key.pem (1679 bytes)
	I1103 21:00:41.421296  184874 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17545-5130/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-541401 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-541401]
	I1103 21:00:41.580439  184874 provision.go:172] copyRemoteCerts
	I1103 21:00:41.580507  184874 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1103 21:00:41.580557  184874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-541401
	I1103 21:00:41.597930  184874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32961 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/running-upgrade-541401/id_rsa Username:docker}
	I1103 21:00:41.679400  184874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1103 21:00:41.695515  184874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1103 21:00:41.713130  184874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1103 21:00:41.728664  184874 provision.go:86] duration metric: configureAuth took 324.328478ms
	I1103 21:00:41.728692  184874 ubuntu.go:193] setting minikube options for container-runtime
	I1103 21:00:41.728889  184874 config.go:182] Loaded profile config "running-upgrade-541401": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1103 21:00:41.728999  184874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-541401
	I1103 21:00:41.748224  184874 main.go:141] libmachine: Using SSH client type: native
	I1103 21:00:41.748579  184874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32961 <nil> <nil>}
	I1103 21:00:41.748603  184874 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1103 21:00:42.149716  184874 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1103 21:00:42.149745  184874 machine.go:91] provisioned docker machine in 999.482017ms
	I1103 21:00:42.149758  184874 start.go:300] post-start starting for "running-upgrade-541401" (driver="docker")
	I1103 21:00:42.149772  184874 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1103 21:00:42.149843  184874 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1103 21:00:42.149892  184874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-541401
	I1103 21:00:42.167887  184874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32961 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/running-upgrade-541401/id_rsa Username:docker}
	I1103 21:00:42.247505  184874 ssh_runner.go:195] Run: cat /etc/os-release
	I1103 21:00:42.250432  184874 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1103 21:00:42.250466  184874 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1103 21:00:42.250485  184874 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1103 21:00:42.250498  184874 info.go:137] Remote host: Ubuntu 19.10
	I1103 21:00:42.250511  184874 filesync.go:126] Scanning /home/jenkins/minikube-integration/17545-5130/.minikube/addons for local assets ...
	I1103 21:00:42.250572  184874 filesync.go:126] Scanning /home/jenkins/minikube-integration/17545-5130/.minikube/files for local assets ...
	I1103 21:00:42.250661  184874 filesync.go:149] local asset: /home/jenkins/minikube-integration/17545-5130/.minikube/files/etc/ssl/certs/118872.pem -> 118872.pem in /etc/ssl/certs
	I1103 21:00:42.250776  184874 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1103 21:00:42.257266  184874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/files/etc/ssl/certs/118872.pem --> /etc/ssl/certs/118872.pem (1708 bytes)
	I1103 21:00:42.273818  184874 start.go:303] post-start completed in 124.046964ms
	I1103 21:00:42.273889  184874 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1103 21:00:42.273938  184874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-541401
	I1103 21:00:42.291207  184874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32961 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/running-upgrade-541401/id_rsa Username:docker}
	I1103 21:00:42.372914  184874 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1103 21:00:42.376834  184874 fix.go:56] fixHost completed within 1.249671014s
	I1103 21:00:42.376861  184874 start.go:83] releasing machines lock for "running-upgrade-541401", held for 1.249733489s
	I1103 21:00:42.376927  184874 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-541401
	I1103 21:00:42.397488  184874 ssh_runner.go:195] Run: cat /version.json
	I1103 21:00:42.397545  184874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-541401
	I1103 21:00:42.397576  184874 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1103 21:00:42.397640  184874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-541401
	I1103 21:00:42.420145  184874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32961 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/running-upgrade-541401/id_rsa Username:docker}
	I1103 21:00:42.422161  184874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32961 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/running-upgrade-541401/id_rsa Username:docker}
	W1103 21:00:42.535748  184874 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1103 21:00:42.535828  184874 ssh_runner.go:195] Run: systemctl --version
	I1103 21:00:42.540207  184874 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1103 21:00:42.595465  184874 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1103 21:00:42.599920  184874 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1103 21:00:42.615141  184874 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1103 21:00:42.615214  184874 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1103 21:00:42.638005  184874 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1103 21:00:42.638029  184874 start.go:472] detecting cgroup driver to use...
	I1103 21:00:42.638063  184874 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1103 21:00:42.638107  184874 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1103 21:00:42.660600  184874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1103 21:00:42.669394  184874 docker.go:203] disabling cri-docker service (if available) ...
	I1103 21:00:42.669446  184874 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1103 21:00:42.678447  184874 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1103 21:00:42.687921  184874 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1103 21:00:42.696912  184874 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1103 21:00:42.696960  184874 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1103 21:00:42.776614  184874 docker.go:219] disabling docker service ...
	I1103 21:00:42.776681  184874 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1103 21:00:42.786795  184874 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1103 21:00:42.797162  184874 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1103 21:00:42.892539  184874 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1103 21:00:42.989652  184874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1103 21:00:42.999845  184874 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1103 21:00:43.013332  184874 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1103 21:00:43.013407  184874 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1103 21:00:43.023945  184874 out.go:177] 
	W1103 21:00:43.025437  184874 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1103 21:00:43.025460  184874 out.go:239] * 
	* 
	W1103 21:00:43.026666  184874 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1103 21:00:43.028340  184874 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-541401 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
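The root cause is visible in the stderr above: the HEAD binary tries to point cri-o at registry.k8s.io/pause:3.2 by rewriting pause_image in /etc/crio/crio.conf.d/02-crio.conf, but this profile was created by minikube v1.9.0 on kicbase v0.0.8 (Ubuntu 19.10, see the docker inspect below), which predates that drop-in layout, so sed exits 2 and start aborts with RUNTIME_ENABLE. A minimal diagnostic sketch against the still-running container (hypothetical commands; the fallback location /etc/crio/crio.conf is an assumption about the old image, not confirmed by this log):

	# hypothetical: is the crio drop-in directory present in the old kicbase image?
	docker exec running-upgrade-541401 ls /etc/crio/crio.conf.d
	# hypothetical fallback: on the old layout the pause_image key would live here
	docker exec running-upgrade-541401 grep -n pause_image /etc/crio/crio.conf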
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-11-03 21:00:43.045809754 +0000 UTC m=+1904.519461836
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-541401
helpers_test.go:235: (dbg) docker inspect running-upgrade-541401:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "86e42922e8048867b05c5dd314e4b8ac973f8abc831cd83e2e15ec1131936fdd",
	        "Created": "2023-11-03T20:59:45.039342932Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 171982,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-03T20:59:45.479605228Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/86e42922e8048867b05c5dd314e4b8ac973f8abc831cd83e2e15ec1131936fdd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/86e42922e8048867b05c5dd314e4b8ac973f8abc831cd83e2e15ec1131936fdd/hostname",
	        "HostsPath": "/var/lib/docker/containers/86e42922e8048867b05c5dd314e4b8ac973f8abc831cd83e2e15ec1131936fdd/hosts",
	        "LogPath": "/var/lib/docker/containers/86e42922e8048867b05c5dd314e4b8ac973f8abc831cd83e2e15ec1131936fdd/86e42922e8048867b05c5dd314e4b8ac973f8abc831cd83e2e15ec1131936fdd-json.log",
	        "Name": "/running-upgrade-541401",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-541401:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d0ee9250ef9394333ba329a8ba1857d722de6860944f09583e1092ef6b10d64b-init/diff:/var/lib/docker/overlay2/ec480a5a35faeee42714eb2008838e8fc45fb54005a5c234b84e6593ef877864/diff:/var/lib/docker/overlay2/451cdf317acddea0c5fa5243af795fd2c18844cf21dd4f1a976eeb29ae22a2d2/diff:/var/lib/docker/overlay2/3d4c827a0b51ce2c86804478ccae7ab3c348a286fee34d795b9810ec2e84054c/diff:/var/lib/docker/overlay2/8d565eea78aa2a994150bc70e09345b4500043a05cb5d5e440855ff5a0937ee9/diff:/var/lib/docker/overlay2/d06f1e1e4f1806fb967ebe0e51491a6f690e90a742170f9bcb52ca84cbe6211f/diff:/var/lib/docker/overlay2/6dcc54bd3e5068e3c1d99fa4868460d27c5021756f2512f71ad68d442702d5de/diff:/var/lib/docker/overlay2/da2bf029d17a131da2edebb510bffb9ff56b471bccacdde85341120dfbaa6f97/diff:/var/lib/docker/overlay2/3721a1549996ead1636069bbc0f891bf82490b584771c4af765a8d87c197f87a/diff:/var/lib/docker/overlay2/83a9fe59329b0f862252f129a63c13315ecf7b1a322662b4029b6ee239506c79/diff:/var/lib/docker/overlay2/999a3c
212622fb27727b7908bd521fb975b6294b100490d7d3a73a77612d59cd/diff:/var/lib/docker/overlay2/e84aade297aaefefd8aa0e41ff472853fc965f2b61a3dc680705a1becd66497a/diff:/var/lib/docker/overlay2/64dbbcff81c3abac477c149a07897f33f29659b9aac9aec278ff15afaaa83433/diff:/var/lib/docker/overlay2/009f724bd6cdfb87c1eb48933a3ecf67e13e1ace63b845c2e44599e86587b0cc/diff:/var/lib/docker/overlay2/9072b8824ffbd996ed980097007dbcd432d4b39f7f3c098035b27402b1c0a4b6/diff:/var/lib/docker/overlay2/ce408b08da4c5d93e9d697bdad2e9253d66703b68afddd8da5336b7bfc02753a/diff:/var/lib/docker/overlay2/b74c363f64dba645db495707240876f663b7bc11e4f708db68f78f4df8a7e23f/diff:/var/lib/docker/overlay2/78d19213e62249d6639b92975a0ac4200e87f4ea3c9f1e54aa48c347ca095bef/diff:/var/lib/docker/overlay2/529b8f2d9109b548897e5d86a842e6dd802d70f218b0adffe70bc19e8591bac5/diff:/var/lib/docker/overlay2/50aaf2e138023caea314d900b178552dcaac2f0f59ba91862645585dc13ac4a3/diff:/var/lib/docker/overlay2/0ae3a23988a9e964c14989a87e0eba57262464e2761caf99bdf1918061e7a19c/diff:/var/lib/d
ocker/overlay2/c1e9c8a131ff253ed730a53b3a99620163fb781c28d2300c6ce75cb7d10d15f5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d0ee9250ef9394333ba329a8ba1857d722de6860944f09583e1092ef6b10d64b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d0ee9250ef9394333ba329a8ba1857d722de6860944f09583e1092ef6b10d64b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d0ee9250ef9394333ba329a8ba1857d722de6860944f09583e1092ef6b10d64b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-541401",
	                "Source": "/var/lib/docker/volumes/running-upgrade-541401/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-541401",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-541401",
	                "name.minikube.sigs.k8s.io": "running-upgrade-541401",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3185a5bcf8c330fde831437d74942ae612732810c74c90f5d038c3fac3de2ece",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32961"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32960"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32959"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3185a5bcf8c3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "b0a4555f9eb90514a6771f5b4d5afbbe44fc78278d4a428253442d0b8193d8c1",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "0612366d079b7f245578a54a4fd564ba303473e0a4458df201abbc7ae4220f8c",
	                    "EndpointID": "b0a4555f9eb90514a6771f5b4d5afbbe44fc78278d4a428253442d0b8193d8c1",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
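The "22/tcp" entry under Ports above is where host port 32961, used by every "new ssh client" line in the stderr log, comes from; the Go template minikube runs (quoted verbatim in the cli_runner lines) can be replayed by hand:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' running-upgrade-541401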
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-541401 -n running-upgrade-541401
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-541401 -n running-upgrade-541401: exit status 4 (316.187211ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1103 21:00:43.351141  185499 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-541401" does not appear in /home/jenkins/minikube-integration/17545-5130/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-541401" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-541401" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-541401
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-541401: (2.909115501s)
--- FAIL: TestRunningBinaryUpgrade (62.00s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (91.31s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.9.0.855739261.exe start -p stopped-upgrade-519866 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.9.0.855739261.exe start -p stopped-upgrade-519866 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m17.873990765s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.9.0.855739261.exe -p stopped-upgrade-519866 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.9.0.855739261.exe -p stopped-upgrade-519866 stop: (1.475687758s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-519866 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-519866 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (11.951440508s)
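Exit status 90 is the same RUNTIME_ENABLE class seen in TestRunningBinaryUpgrade above, and this profile is the same v1.9.0 + crio combination, so the missing /etc/crio/crio.conf.d/02-crio.conf drop-in is the likely culprit here as well. A quick check while the restarted container is still up (hypothetical, mirroring the sketch above):

	docker exec stopped-upgrade-519866 ls /etc/crio/crio.conf.d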

                                                
                                                
-- stdout --
	* [stopped-upgrade-519866] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17545
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17545-5130/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17545-5130/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-519866 in cluster stopped-upgrade-519866
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-519866" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1103 20:59:27.671749  166375 out.go:296] Setting OutFile to fd 1 ...
	I1103 20:59:27.671870  166375 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1103 20:59:27.671881  166375 out.go:309] Setting ErrFile to fd 2...
	I1103 20:59:27.671888  166375 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1103 20:59:27.672085  166375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17545-5130/.minikube/bin
	I1103 20:59:27.672629  166375 out.go:303] Setting JSON to false
	I1103 20:59:27.673961  166375 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2518,"bootTime":1699042650,"procs":602,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1103 20:59:27.674031  166375 start.go:138] virtualization: kvm guest
	I1103 20:59:27.676391  166375 out.go:177] * [stopped-upgrade-519866] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1103 20:59:27.678889  166375 out.go:177]   - MINIKUBE_LOCATION=17545
	I1103 20:59:27.680588  166375 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1103 20:59:27.678998  166375 notify.go:220] Checking for updates...
	I1103 20:59:27.684481  166375 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17545-5130/kubeconfig
	I1103 20:59:27.685754  166375 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17545-5130/.minikube
	I1103 20:59:27.687288  166375 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1103 20:59:27.688920  166375 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1103 20:59:27.691144  166375 config.go:182] Loaded profile config "stopped-upgrade-519866": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1103 20:59:27.691167  166375 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89
	I1103 20:59:27.693109  166375 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1103 20:59:27.695168  166375 driver.go:378] Setting default libvirt URI to qemu:///system
	I1103 20:59:27.720608  166375 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1103 20:59:27.720696  166375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1103 20:59:27.820663  166375 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:82 OomKillDisable:true NGoroutines:82 SystemTime:2023-11-03 20:59:27.810580946 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1103 20:59:27.820800  166375 docker.go:295] overlay module found
	I1103 20:59:27.822670  166375 out.go:177] * Using the docker driver based on existing profile
	I1103 20:59:27.823997  166375 start.go:298] selected driver: docker
	I1103 20:59:27.824010  166375 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-519866 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-519866 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1103 20:59:27.824093  166375 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1103 20:59:27.825091  166375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1103 20:59:27.895731  166375 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:82 OomKillDisable:true NGoroutines:82 SystemTime:2023-11-03 20:59:27.8867847 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1103 20:59:27.896135  166375 cni.go:84] Creating CNI manager for ""
	I1103 20:59:27.896168  166375 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1103 20:59:27.896184  166375 start_flags.go:323] config:
	{Name:stopped-upgrade-519866 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-519866 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1103 20:59:27.898031  166375 out.go:177] * Starting control plane node stopped-upgrade-519866 in cluster stopped-upgrade-519866
	I1103 20:59:27.899344  166375 cache.go:121] Beginning downloading kic base image for docker with crio
	I1103 20:59:27.900817  166375 out.go:177] * Pulling base image ...
	I1103 20:59:27.902127  166375 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1103 20:59:27.902220  166375 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 in local docker daemon
	I1103 20:59:27.920159  166375 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 in local docker daemon, skipping pull
	I1103 20:59:27.920198  166375 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 exists in daemon, skipping load
	W1103 20:59:27.933483  166375 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1103 20:59:27.933721  166375 profile.go:148] Saving config to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/stopped-upgrade-519866/config.json ...
	I1103 20:59:27.933763  166375 cache.go:107] acquiring lock: {Name:mk73a26abc65b338f9fca6ccee09ab6c3db8eb69 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1103 20:59:27.933796  166375 cache.go:107] acquiring lock: {Name:mk03f11442ecb566999f9f5d7f999672931ccfaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1103 20:59:27.933836  166375 cache.go:107] acquiring lock: {Name:mk0656e5d10ad2a2ff852b5a6bfb1813b21068a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1103 20:59:27.933866  166375 cache.go:107] acquiring lock: {Name:mkd1abaf61e4b796b3cd3c0c019e209859ba5dd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1103 20:59:27.933959  166375 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.0
	I1103 20:59:27.933980  166375 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.0
	I1103 20:59:27.933993  166375 cache.go:194] Successfully downloaded all kic artifacts
	I1103 20:59:27.934009  166375 cache.go:107] acquiring lock: {Name:mk722662d6cc7985b68bf535564390d868a5cb12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1103 20:59:27.933766  166375 cache.go:107] acquiring lock: {Name:mk05e8afc476f21b75f4dcc4e03bcd91934021b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1103 20:59:27.934097  166375 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1103 20:59:27.934126  166375 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.0
	I1103 20:59:27.934146  166375 cache.go:107] acquiring lock: {Name:mk67a29ab3127d32aab942bb5f77c1cad94ad4ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1103 20:59:27.933853  166375 cache.go:115] /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1103 20:59:27.933806  166375 cache.go:107] acquiring lock: {Name:mkeda5447808b388c79d63621b3d49ece39dc20e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1103 20:59:27.933997  166375 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1103 20:59:27.934244  166375 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.0
	I1103 20:59:27.934285  166375 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1103 20:59:27.934201  166375 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 451.184µs
	I1103 20:59:27.934331  166375 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1103 20:59:27.934024  166375 start.go:365] acquiring machines lock for stopped-upgrade-519866: {Name:mka956e281be1da678f02f0096e92744bc344afa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1103 20:59:27.934434  166375 start.go:369] acquired machines lock for "stopped-upgrade-519866" in 42.024µs
	I1103 20:59:27.934456  166375 start.go:96] Skipping create...Using existing machine configuration
	I1103 20:59:27.934467  166375 fix.go:54] fixHost starting: m01
	I1103 20:59:27.934743  166375 cli_runner.go:164] Run: docker container inspect stopped-upgrade-519866 --format={{.State.Status}}
	I1103 20:59:27.935571  166375 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.0
	I1103 20:59:27.935580  166375 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.0
	I1103 20:59:27.935595  166375 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.0
	I1103 20:59:27.935585  166375 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1103 20:59:27.935565  166375 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1103 20:59:27.935650  166375 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.0
	I1103 20:59:27.935675  166375 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1103 20:59:27.955809  166375 fix.go:102] recreateIfNeeded on stopped-upgrade-519866: state=Stopped err=<nil>
	W1103 20:59:27.955830  166375 fix.go:128] unexpected machine state, will restart: <nil>
	I1103 20:59:27.958043  166375 out.go:177] * Restarting existing docker container for "stopped-upgrade-519866" ...
	I1103 20:59:27.962952  166375 cli_runner.go:164] Run: docker start stopped-upgrade-519866
	I1103 20:59:28.166127  166375 cache.go:162] opening:  /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1103 20:59:28.177210  166375 cache.go:162] opening:  /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1103 20:59:28.179826  166375 cache.go:162] opening:  /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0
	I1103 20:59:28.192080  166375 cache.go:162] opening:  /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0
	I1103 20:59:28.206362  166375 cache.go:162] opening:  /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0
	I1103 20:59:28.232862  166375 cache.go:162] opening:  /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1103 20:59:28.245115  166375 cli_runner.go:164] Run: docker container inspect stopped-upgrade-519866 --format={{.State.Status}}
	I1103 20:59:28.246193  166375 cache.go:162] opening:  /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0
	I1103 20:59:28.255425  166375 cache.go:157] /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1103 20:59:28.255453  166375 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 321.647992ms
	I1103 20:59:28.255468  166375 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1103 20:59:28.273437  166375 kic.go:430] container "stopped-upgrade-519866" state is running.
	I1103 20:59:28.273903  166375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-519866
	I1103 20:59:28.297825  166375 profile.go:148] Saving config to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/stopped-upgrade-519866/config.json ...
	I1103 20:59:28.298049  166375 machine.go:88] provisioning docker machine ...
	I1103 20:59:28.298067  166375 ubuntu.go:169] provisioning hostname "stopped-upgrade-519866"
	I1103 20:59:28.298111  166375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-519866
	I1103 20:59:28.318944  166375 main.go:141] libmachine: Using SSH client type: native
	I1103 20:59:28.320457  166375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32953 <nil> <nil>}
	I1103 20:59:28.320521  166375 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-519866 && echo "stopped-upgrade-519866" | sudo tee /etc/hostname
	I1103 20:59:28.321152  166375 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55514->127.0.0.1:32953: read: connection reset by peer
	I1103 20:59:28.755829  166375 cache.go:157] /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1103 20:59:28.755865  166375 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 821.856948ms
	I1103 20:59:28.755876  166375 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1103 20:59:29.173780  166375 cache.go:157] /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1103 20:59:29.173820  166375 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 1.239983563s
	I1103 20:59:29.173837  166375 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1103 20:59:29.393742  166375 cache.go:157] /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1103 20:59:29.393778  166375 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 1.459988886s
	I1103 20:59:29.393796  166375 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1103 20:59:29.417063  166375 cache.go:157] /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1103 20:59:29.417093  166375 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 1.483336478s
	I1103 20:59:29.417110  166375 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1103 20:59:29.799815  166375 cache.go:157] /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1103 20:59:29.799849  166375 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 1.865705319s
	I1103 20:59:29.799865  166375 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1103 20:59:29.897131  166375 cache.go:157] /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1103 20:59:29.897154  166375 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.963290231s
	I1103 20:59:29.897164  166375 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17545-5130/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1103 20:59:29.897183  166375 cache.go:87] Successfully saved all images to host disk.
	I1103 20:59:31.445579  166375 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-519866
	
	I1103 20:59:31.445670  166375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-519866
	I1103 20:59:31.464200  166375 main.go:141] libmachine: Using SSH client type: native
	I1103 20:59:31.464623  166375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32953 <nil> <nil>}
	I1103 20:59:31.464654  166375 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-519866' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-519866/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-519866' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1103 20:59:31.576575  166375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1103 20:59:31.576607  166375 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17545-5130/.minikube CaCertPath:/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17545-5130/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17545-5130/.minikube}
	I1103 20:59:31.576648  166375 ubuntu.go:177] setting up certificates
	I1103 20:59:31.576668  166375 provision.go:83] configureAuth start
	I1103 20:59:31.576734  166375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-519866
	I1103 20:59:31.596611  166375 provision.go:138] copyHostCerts
	I1103 20:59:31.596674  166375 exec_runner.go:144] found /home/jenkins/minikube-integration/17545-5130/.minikube/ca.pem, removing ...
	I1103 20:59:31.596686  166375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.pem
	I1103 20:59:31.596773  166375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17545-5130/.minikube/ca.pem (1082 bytes)
	I1103 20:59:31.596893  166375 exec_runner.go:144] found /home/jenkins/minikube-integration/17545-5130/.minikube/cert.pem, removing ...
	I1103 20:59:31.596905  166375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17545-5130/.minikube/cert.pem
	I1103 20:59:31.596940  166375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17545-5130/.minikube/cert.pem (1123 bytes)
	I1103 20:59:31.597075  166375 exec_runner.go:144] found /home/jenkins/minikube-integration/17545-5130/.minikube/key.pem, removing ...
	I1103 20:59:31.597090  166375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17545-5130/.minikube/key.pem
	I1103 20:59:31.597124  166375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17545-5130/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17545-5130/.minikube/key.pem (1679 bytes)
	I1103 20:59:31.597193  166375 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17545-5130/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-519866 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-519866]
	I1103 20:59:31.963121  166375 provision.go:172] copyRemoteCerts
	I1103 20:59:31.963200  166375 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1103 20:59:31.963251  166375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-519866
	I1103 20:59:31.989590  166375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/stopped-upgrade-519866/id_rsa Username:docker}
	I1103 20:59:32.079096  166375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1103 20:59:32.097994  166375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1103 20:59:32.114561  166375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1103 20:59:32.130518  166375 provision.go:86] duration metric: configureAuth took 553.834994ms
	I1103 20:59:32.130545  166375 ubuntu.go:193] setting minikube options for container-runtime
	I1103 20:59:32.130793  166375 config.go:182] Loaded profile config "stopped-upgrade-519866": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1103 20:59:32.130903  166375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-519866
	I1103 20:59:32.153875  166375 main.go:141] libmachine: Using SSH client type: native
	I1103 20:59:32.154415  166375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32953 <nil> <nil>}
	I1103 20:59:32.154440  166375 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1103 20:59:38.510566  166375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1103 20:59:38.510588  166375 machine.go:91] provisioned docker machine in 10.212529664s
	I1103 20:59:38.510602  166375 start.go:300] post-start starting for "stopped-upgrade-519866" (driver="docker")
	I1103 20:59:38.510612  166375 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1103 20:59:38.510659  166375 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1103 20:59:38.510697  166375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-519866
	I1103 20:59:38.527548  166375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/stopped-upgrade-519866/id_rsa Username:docker}
	I1103 20:59:38.619228  166375 ssh_runner.go:195] Run: cat /etc/os-release
	I1103 20:59:38.621830  166375 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1103 20:59:38.621854  166375 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1103 20:59:38.621863  166375 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1103 20:59:38.621872  166375 info.go:137] Remote host: Ubuntu 19.10
	I1103 20:59:38.621881  166375 filesync.go:126] Scanning /home/jenkins/minikube-integration/17545-5130/.minikube/addons for local assets ...
	I1103 20:59:38.621935  166375 filesync.go:126] Scanning /home/jenkins/minikube-integration/17545-5130/.minikube/files for local assets ...
	I1103 20:59:38.622001  166375 filesync.go:149] local asset: /home/jenkins/minikube-integration/17545-5130/.minikube/files/etc/ssl/certs/118872.pem -> 118872.pem in /etc/ssl/certs
	I1103 20:59:38.622077  166375 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1103 20:59:38.628157  166375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17545-5130/.minikube/files/etc/ssl/certs/118872.pem --> /etc/ssl/certs/118872.pem (1708 bytes)
	I1103 20:59:38.643689  166375 start.go:303] post-start completed in 133.074855ms
	I1103 20:59:38.643750  166375 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1103 20:59:38.643793  166375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-519866
	I1103 20:59:38.660501  166375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/stopped-upgrade-519866/id_rsa Username:docker}
	I1103 20:59:38.740699  166375 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1103 20:59:38.744257  166375 fix.go:56] fixHost completed within 10.809783642s
	I1103 20:59:38.744284  166375 start.go:83] releasing machines lock for "stopped-upgrade-519866", held for 10.809834187s
	I1103 20:59:38.744359  166375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-519866
	I1103 20:59:38.761044  166375 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1103 20:59:38.761074  166375 ssh_runner.go:195] Run: cat /version.json
	I1103 20:59:38.761122  166375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-519866
	I1103 20:59:38.761132  166375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-519866
	I1103 20:59:38.779488  166375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/stopped-upgrade-519866/id_rsa Username:docker}
	I1103 20:59:38.781407  166375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/stopped-upgrade-519866/id_rsa Username:docker}
	W1103 20:59:38.855745  166375 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1103 20:59:38.855819  166375 ssh_runner.go:195] Run: systemctl --version
	I1103 20:59:38.927267  166375 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1103 20:59:38.980590  166375 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1103 20:59:38.986131  166375 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1103 20:59:39.004717  166375 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1103 20:59:39.004802  166375 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1103 20:59:39.029825  166375 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1103 20:59:39.029851  166375 start.go:472] detecting cgroup driver to use...
	I1103 20:59:39.029888  166375 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1103 20:59:39.029934  166375 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1103 20:59:39.088802  166375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1103 20:59:39.102253  166375 docker.go:203] disabling cri-docker service (if available) ...
	I1103 20:59:39.102296  166375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1103 20:59:39.114738  166375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1103 20:59:39.129809  166375 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1103 20:59:39.145489  166375 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1103 20:59:39.145550  166375 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1103 20:59:39.228552  166375 docker.go:219] disabling docker service ...
	I1103 20:59:39.228631  166375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1103 20:59:39.246635  166375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1103 20:59:39.258160  166375 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1103 20:59:39.376486  166375 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1103 20:59:39.518086  166375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1103 20:59:39.529778  166375 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1103 20:59:39.544935  166375 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1103 20:59:39.544979  166375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1103 20:59:39.555754  166375 out.go:177] 
	W1103 20:59:39.557078  166375 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1103 20:59:39.557098  166375 out.go:239] * 
	W1103 20:59:39.557969  166375 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1103 20:59:39.560086  166375 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-519866 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (91.31s)
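
The failing step above comes down to one command: the v1.9.0-era base image ships a CRI-O that predates the /etc/crio/crio.conf.d/ drop-in directory, so the pause_image sed has no file to edit, exits with status 2, and minikube surfaces it as RUNTIME_ENABLE. A minimal defensive sketch of that step, assuming shell access on the node; the fallback branch is illustrative only, not minikube's actual fix:

	# Rewrite pause_image when the drop-in exists; otherwise create it (hypothetical fallback).
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	if [ -f "$CONF" ]; then
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
	else
	  sudo mkdir -p /etc/crio/crio.conf.d
	  # pause_image belongs to the [crio.image] table in CRI-O's TOML config.
	  printf '[crio.image]\npause_image = "registry.k8s.io/pause:3.2"\n' | sudo tee "$CONF" >/dev/null
	fi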

                                                
                                    

Test pass (278/308)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 10.21
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.3/json-events 5.75
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.19
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.13
18 TestDownloadOnlyKic 1.26
19 TestBinaryMirror 0.73
20 TestOffline 91.22
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
25 TestAddons/Setup 136.71
27 TestAddons/parallel/Registry 13.92
29 TestAddons/parallel/InspektorGadget 10.66
30 TestAddons/parallel/MetricsServer 5.77
31 TestAddons/parallel/HelmTiller 10.28
33 TestAddons/parallel/CSI 101.32
34 TestAddons/parallel/Headlamp 12.05
35 TestAddons/parallel/CloudSpanner 5.53
36 TestAddons/parallel/LocalPath 12.54
37 TestAddons/parallel/NvidiaDevicePlugin 5.45
40 TestAddons/serial/GCPAuth/Namespaces 0.11
41 TestAddons/StoppedEnableDisable 12.13
42 TestCertOptions 26.73
43 TestCertExpiration 230.89
45 TestForceSystemdFlag 29.46
46 TestForceSystemdEnv 42.4
48 TestKVMDriverInstallOrUpdate 2.86
52 TestErrorSpam/setup 21.18
53 TestErrorSpam/start 0.62
54 TestErrorSpam/status 0.86
55 TestErrorSpam/pause 1.46
56 TestErrorSpam/unpause 1.44
57 TestErrorSpam/stop 1.38
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 43.36
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 39.11
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.07
68 TestFunctional/serial/CacheCmd/cache/add_remote 2.75
69 TestFunctional/serial/CacheCmd/cache/add_local 1.18
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
71 TestFunctional/serial/CacheCmd/cache/list 0.06
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
73 TestFunctional/serial/CacheCmd/cache/cache_reload 1.63
74 TestFunctional/serial/CacheCmd/cache/delete 0.12
75 TestFunctional/serial/MinikubeKubectlCmd 0.12
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
77 TestFunctional/serial/ExtraConfig 32.72
78 TestFunctional/serial/ComponentHealth 0.06
79 TestFunctional/serial/LogsCmd 1.29
80 TestFunctional/serial/LogsFileCmd 1.31
81 TestFunctional/serial/InvalidService 4.62
83 TestFunctional/parallel/ConfigCmd 0.5
84 TestFunctional/parallel/DashboardCmd 10.29
85 TestFunctional/parallel/DryRun 0.51
86 TestFunctional/parallel/InternationalLanguage 0.19
87 TestFunctional/parallel/StatusCmd 1.32
91 TestFunctional/parallel/ServiceCmdConnect 10.69
92 TestFunctional/parallel/AddonsCmd 0.26
93 TestFunctional/parallel/PersistentVolumeClaim 30.1
95 TestFunctional/parallel/SSHCmd 0.7
96 TestFunctional/parallel/CpCmd 1.29
97 TestFunctional/parallel/MySQL 19.88
98 TestFunctional/parallel/FileSync 0.27
99 TestFunctional/parallel/CertSync 1.65
103 TestFunctional/parallel/NodeLabels 0.09
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.75
107 TestFunctional/parallel/License 0.22
108 TestFunctional/parallel/ServiceCmd/DeployApp 11.21
109 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
110 TestFunctional/parallel/MountCmd/any-port 8.69
111 TestFunctional/parallel/ProfileCmd/profile_list 0.43
112 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
113 TestFunctional/parallel/MountCmd/specific-port 1.55
114 TestFunctional/parallel/MountCmd/VerifyCleanup 1.63
115 TestFunctional/parallel/ServiceCmd/List 0.49
116 TestFunctional/parallel/ServiceCmd/JSONOutput 0.49
117 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
119 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.51
120 TestFunctional/parallel/ServiceCmd/Format 0.54
121 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 13.47
124 TestFunctional/parallel/ServiceCmd/URL 0.69
125 TestFunctional/parallel/Version/short 0.08
126 TestFunctional/parallel/Version/components 0.6
127 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
128 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
129 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
130 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
131 TestFunctional/parallel/ImageCommands/ImageBuild 1.76
132 TestFunctional/parallel/ImageCommands/Setup 1.01
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 6.32
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.74
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.64
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
137 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
138 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
139 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.68
146 TestFunctional/parallel/ImageCommands/ImageRemove 1.01
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.1
149 TestFunctional/delete_addon-resizer_images 0.07
150 TestFunctional/delete_my-image_image 0.01
151 TestFunctional/delete_minikube_cached_images 0.01
155 TestIngressAddonLegacy/StartLegacyK8sCluster 65.97
157 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.23
158 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.53
162 TestJSONOutput/start/Command 66.95
163 TestJSONOutput/start/Audit 0
165 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
168 TestJSONOutput/pause/Command 0.64
169 TestJSONOutput/pause/Audit 0
171 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/unpause/Command 0.57
175 TestJSONOutput/unpause/Audit 0
177 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/stop/Command 5.71
181 TestJSONOutput/stop/Audit 0
183 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
185 TestErrorJSONOutput 0.22
187 TestKicCustomNetwork/create_custom_network 31.29
188 TestKicCustomNetwork/use_default_bridge_network 24.16
189 TestKicExistingNetwork 25.97
190 TestKicCustomSubnet 26.63
191 TestKicStaticIP 27.31
192 TestMainNoArgs 0.07
193 TestMinikubeProfile 49.87
196 TestMountStart/serial/StartWithMountFirst 5.23
197 TestMountStart/serial/VerifyMountFirst 0.24
198 TestMountStart/serial/StartWithMountSecond 8
199 TestMountStart/serial/VerifyMountSecond 0.25
200 TestMountStart/serial/DeleteFirst 1.58
201 TestMountStart/serial/VerifyMountPostDelete 0.25
202 TestMountStart/serial/Stop 1.21
203 TestMountStart/serial/RestartStopped 7.03
204 TestMountStart/serial/VerifyMountPostStop 0.24
207 TestMultiNode/serial/FreshStart2Nodes 84.17
208 TestMultiNode/serial/DeployApp2Nodes 3.66
210 TestMultiNode/serial/AddNode 15.94
211 TestMultiNode/serial/ProfileList 0.27
212 TestMultiNode/serial/CopyFile 8.93
213 TestMultiNode/serial/StopNode 2.08
214 TestMultiNode/serial/StartAfterStop 10.48
215 TestMultiNode/serial/RestartKeepsNodes 117.33
216 TestMultiNode/serial/DeleteNode 4.61
217 TestMultiNode/serial/StopMultiNode 23.82
218 TestMultiNode/serial/RestartMultiNode 72.75
219 TestMultiNode/serial/ValidateNameConflict 25.8
224 TestPreload 144.06
226 TestScheduledStopUnix 99.42
229 TestInsufficientStorage 12.91
232 TestKubernetesUpgrade 347.4
233 TestMissingContainerUpgrade 142.26
235 TestStoppedBinaryUpgrade/Setup 0.45
236 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
237 TestNoKubernetes/serial/StartWithK8s 34.68
239 TestNoKubernetes/serial/StartWithStopK8s 7.8
240 TestNoKubernetes/serial/Start 10.08
241 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
242 TestNoKubernetes/serial/ProfileList 1.4
243 TestNoKubernetes/serial/Stop 1.22
244 TestNoKubernetes/serial/StartNoArgs 9.3
245 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
247 TestPause/serial/Start 71.08
248 TestStoppedBinaryUpgrade/MinikubeLogs 0.9
259 TestPause/serial/SecondStartNoReconfiguration 42.11
264 TestNetworkPlugins/group/false 3.74
268 TestPause/serial/Pause 0.9
269 TestPause/serial/VerifyStatus 0.41
270 TestPause/serial/Unpause 1.08
271 TestPause/serial/PauseAgain 0.84
272 TestPause/serial/DeletePaused 2.78
273 TestPause/serial/VerifyDeletedResources 13.25
275 TestStartStop/group/old-k8s-version/serial/FirstStart 114.55
277 TestStartStop/group/no-preload/serial/FirstStart 61.88
278 TestStartStop/group/no-preload/serial/DeployApp 8.34
279 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.83
280 TestStartStop/group/no-preload/serial/Stop 11.86
281 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
282 TestStartStop/group/no-preload/serial/SecondStart 336.98
283 TestStartStop/group/old-k8s-version/serial/DeployApp 8.49
284 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.74
285 TestStartStop/group/old-k8s-version/serial/Stop 11.87
286 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
287 TestStartStop/group/old-k8s-version/serial/SecondStart 419.55
289 TestStartStop/group/embed-certs/serial/FirstStart 38.08
291 TestStartStop/group/newest-cni/serial/FirstStart 35.64
292 TestStartStop/group/embed-certs/serial/DeployApp 7.35
293 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.84
294 TestStartStop/group/embed-certs/serial/Stop 11.9
295 TestStartStop/group/newest-cni/serial/DeployApp 0
296 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.77
297 TestStartStop/group/newest-cni/serial/Stop 1.22
298 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
299 TestStartStop/group/newest-cni/serial/SecondStart 26.42
300 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
301 TestStartStop/group/embed-certs/serial/SecondStart 332.11
302 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
303 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
304 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
305 TestStartStop/group/newest-cni/serial/Pause 2.48
307 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 37.28
308 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.33
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.97
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.93
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
312 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 341.9
313 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.02
314 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
315 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
316 TestStartStop/group/no-preload/serial/Pause 2.59
317 TestNetworkPlugins/group/auto/Start 68.71
318 TestNetworkPlugins/group/auto/KubeletFlags 0.26
319 TestNetworkPlugins/group/auto/NetCatPod 10.23
320 TestNetworkPlugins/group/auto/DNS 0.16
321 TestNetworkPlugins/group/auto/Localhost 0.13
322 TestNetworkPlugins/group/auto/HairPin 0.13
323 TestNetworkPlugins/group/kindnet/Start 71.82
324 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
325 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
326 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.36
327 TestStartStop/group/old-k8s-version/serial/Pause 2.98
328 TestNetworkPlugins/group/calico/Start 61.9
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.02
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.32
332 TestStartStop/group/embed-certs/serial/Pause 2.82
333 TestNetworkPlugins/group/custom-flannel/Start 58.62
334 TestNetworkPlugins/group/kindnet/ControllerPod 5.2
335 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
336 TestNetworkPlugins/group/kindnet/NetCatPod 9.23
337 TestNetworkPlugins/group/calico/ControllerPod 5.02
338 TestNetworkPlugins/group/kindnet/DNS 0.19
339 TestNetworkPlugins/group/kindnet/Localhost 0.15
340 TestNetworkPlugins/group/kindnet/HairPin 0.16
341 TestNetworkPlugins/group/calico/KubeletFlags 0.26
342 TestNetworkPlugins/group/calico/NetCatPod 10.34
343 TestNetworkPlugins/group/calico/DNS 0.2
344 TestNetworkPlugins/group/calico/Localhost 0.18
345 TestNetworkPlugins/group/calico/HairPin 0.18
346 TestNetworkPlugins/group/enable-default-cni/Start 42.64
347 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
348 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.27
349 TestNetworkPlugins/group/flannel/Start 60.71
350 TestNetworkPlugins/group/custom-flannel/DNS 0.17
351 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
352 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
353 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 14.02
354 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
355 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.26
356 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
357 TestNetworkPlugins/group/bridge/Start 37.17
358 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.37
359 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.26
360 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
361 TestNetworkPlugins/group/enable-default-cni/Localhost 0.22
362 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
363 TestNetworkPlugins/group/flannel/ControllerPod 5.02
364 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
365 TestNetworkPlugins/group/flannel/NetCatPod 10.27
366 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
367 TestNetworkPlugins/group/bridge/NetCatPod 9.25
368 TestNetworkPlugins/group/flannel/DNS 0.15
369 TestNetworkPlugins/group/flannel/Localhost 0.13
370 TestNetworkPlugins/group/flannel/HairPin 0.13
371 TestNetworkPlugins/group/bridge/DNS 32.51
372 TestNetworkPlugins/group/bridge/Localhost 0.13
373 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.16.0/json-events (10.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-798930 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-798930 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.21278588s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (10.21s)
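
With -o=json, minikube prints one JSON event per line instead of styled output, which is what json-events asserts against. A sketch of replaying the download and extracting just the step messages; the event type and data.message field names follow minikube's CloudEvents-style JSON output and are an assumption here, not shown in this report:

	# Re-run the download-only start and print step messages only (sketch).
	out/minikube-linux-amd64 start -o=json --download-only -p download-only-798930 \
	  --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'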

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-798930
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-798930: exit status 85 (68.82371ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-798930 | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:28 UTC |          |
	|         | -p download-only-798930        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|         | --driver=docker                |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/03 20:28:58
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1103 20:28:58.624470   11899 out.go:296] Setting OutFile to fd 1 ...
	I1103 20:28:58.624600   11899 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1103 20:28:58.624610   11899 out.go:309] Setting ErrFile to fd 2...
	I1103 20:28:58.624616   11899 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1103 20:28:58.624796   11899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17545-5130/.minikube/bin
	W1103 20:28:58.624898   11899 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17545-5130/.minikube/config/config.json: open /home/jenkins/minikube-integration/17545-5130/.minikube/config/config.json: no such file or directory
	I1103 20:28:58.625467   11899 out.go:303] Setting JSON to true
	I1103 20:28:58.626268   11899 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":689,"bootTime":1699042650,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1103 20:28:58.626347   11899 start.go:138] virtualization: kvm guest
	I1103 20:28:58.628782   11899 out.go:97] [download-only-798930] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1103 20:28:58.630396   11899 out.go:169] MINIKUBE_LOCATION=17545
	W1103 20:28:58.628898   11899 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17545-5130/.minikube/cache/preloaded-tarball: no such file or directory
	I1103 20:28:58.628952   11899 notify.go:220] Checking for updates...
	I1103 20:28:58.633413   11899 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1103 20:28:58.635103   11899 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17545-5130/kubeconfig
	I1103 20:28:58.636639   11899 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17545-5130/.minikube
	I1103 20:28:58.638075   11899 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1103 20:28:58.640545   11899 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1103 20:28:58.640751   11899 driver.go:378] Setting default libvirt URI to qemu:///system
	I1103 20:28:58.661003   11899 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1103 20:28:58.661087   11899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1103 20:28:58.984898   11899 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-03 20:28:58.976234372 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1103 20:28:58.984980   11899 docker.go:295] overlay module found
	I1103 20:28:58.986570   11899 out.go:97] Using the docker driver based on user configuration
	I1103 20:28:58.986591   11899 start.go:298] selected driver: docker
	I1103 20:28:58.986596   11899 start.go:902] validating driver "docker" against <nil>
	I1103 20:28:58.986666   11899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1103 20:28:59.039529   11899 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-03 20:28:59.031928249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1103 20:28:59.039682   11899 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1103 20:28:59.040167   11899 start_flags.go:394] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1103 20:28:59.040332   11899 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1103 20:28:59.042120   11899 out.go:169] Using Docker driver with root privileges
	I1103 20:28:59.043394   11899 cni.go:84] Creating CNI manager for ""
	I1103 20:28:59.043412   11899 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1103 20:28:59.043424   11899 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1103 20:28:59.043434   11899 start_flags.go:323] config:
	{Name:download-only-798930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-798930 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1103 20:28:59.044765   11899 out.go:97] Starting control plane node download-only-798930 in cluster download-only-798930
	I1103 20:28:59.044784   11899 cache.go:121] Beginning downloading kic base image for docker with crio
	I1103 20:28:59.046038   11899 out.go:97] Pulling base image ...
	I1103 20:28:59.046056   11899 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1103 20:28:59.046096   11899 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 in local docker daemon
	I1103 20:28:59.059965   11899 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 to local cache
	I1103 20:28:59.060112   11899 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 in local cache directory
	I1103 20:28:59.060201   11899 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 to local cache
	I1103 20:28:59.085453   11899 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1103 20:28:59.085470   11899 cache.go:56] Caching tarball of preloaded images
	I1103 20:28:59.085575   11899 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1103 20:28:59.087233   11899 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1103 20:28:59.087250   11899 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1103 20:28:59.120787   11899 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17545-5130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1103 20:29:03.044249   11899 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 as a tarball
	I1103 20:29:03.374016   11899 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1103 20:29:03.374099   11899 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17545-5130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1103 20:29:04.273982   11899 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1103 20:29:04.274298   11899 profile.go:148] Saving config to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/download-only-798930/config.json ...
	I1103 20:29:04.274325   11899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/download-only-798930/config.json: {Name:mk93f034275a42c10db9c6104f1adf9d6eaae2ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1103 20:29:04.274498   11899 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1103 20:29:04.274648   11899 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17545-5130/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-798930"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
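
The download.go:107 line above carries the preload URL with its md5 embedded in the checksum= query parameter, which makes the cached tarball easy to re-verify by hand. Both the path and the hash below are taken from that log line; only the combination into an md5sum check is new:

	# Verify the v1.16.0 preload tarball against the md5 from the download URL.
	P=/home/jenkins/minikube-integration/17545-5130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	echo "432b600409d778ea7a21214e83948570  $P" | md5sum -c -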

                                                
                                    
TestDownloadOnly/v1.28.3/json-events (5.75s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-798930 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-798930 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.750747884s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (5.75s)

                                                
                                    
TestDownloadOnly/v1.28.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)
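
preload-exists passes when the expected tarball is already in the cache from the earlier json-events run. The equivalent manual check is a plain file test; the v1.28.3 filename below is inferred from the v1.16.0 naming seen earlier in this report, not printed anywhere in it:

	# Confirm the v1.28.3 preload landed in the cache (filename inferred; sketch only).
	ls -lh /home/jenkins/minikube-integration/17545-5130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4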

                                                
                                    
TestDownloadOnly/v1.28.3/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-798930
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-798930: exit status 85 (70.823347ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-798930 | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:28 UTC |          |
	|         | -p download-only-798930        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|         | --driver=docker                |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	| start   | -o=json --download-only        | download-only-798930 | jenkins | v1.32.0-beta.0 | 03 Nov 23 20:29 UTC |          |
	|         | -p download-only-798930        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|         | --driver=docker                |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/03 20:29:08
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1103 20:29:08.908656   12057 out.go:296] Setting OutFile to fd 1 ...
	I1103 20:29:08.908803   12057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1103 20:29:08.908812   12057 out.go:309] Setting ErrFile to fd 2...
	I1103 20:29:08.908817   12057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1103 20:29:08.909034   12057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17545-5130/.minikube/bin
	W1103 20:29:08.909170   12057 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17545-5130/.minikube/config/config.json: open /home/jenkins/minikube-integration/17545-5130/.minikube/config/config.json: no such file or directory
	I1103 20:29:08.909605   12057 out.go:303] Setting JSON to true
	I1103 20:29:08.910393   12057 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":699,"bootTime":1699042650,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1103 20:29:08.910450   12057 start.go:138] virtualization: kvm guest
	I1103 20:29:08.912419   12057 out.go:97] [download-only-798930] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1103 20:29:08.914031   12057 out.go:169] MINIKUBE_LOCATION=17545
	I1103 20:29:08.912626   12057 notify.go:220] Checking for updates...
	I1103 20:29:08.916826   12057 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1103 20:29:08.918273   12057 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17545-5130/kubeconfig
	I1103 20:29:08.919806   12057 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17545-5130/.minikube
	I1103 20:29:08.921052   12057 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1103 20:29:08.923481   12057 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1103 20:29:08.923890   12057 config.go:182] Loaded profile config "download-only-798930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1103 20:29:08.923929   12057 start.go:810] api.Load failed for download-only-798930: filestore "download-only-798930": Docker machine "download-only-798930" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1103 20:29:08.924008   12057 driver.go:378] Setting default libvirt URI to qemu:///system
	W1103 20:29:08.924040   12057 start.go:810] api.Load failed for download-only-798930: filestore "download-only-798930": Docker machine "download-only-798930" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1103 20:29:08.944175   12057 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1103 20:29:08.944241   12057 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1103 20:29:08.995690   12057 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-11-03 20:29:08.987576058 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1103 20:29:08.995776   12057 docker.go:295] overlay module found
	I1103 20:29:08.997558   12057 out.go:97] Using the docker driver based on existing profile
	I1103 20:29:08.997587   12057 start.go:298] selected driver: docker
	I1103 20:29:08.997603   12057 start.go:902] validating driver "docker" against &{Name:download-only-798930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-798930 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1103 20:29:08.997737   12057 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1103 20:29:09.045252   12057 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-11-03 20:29:09.037915218 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1103 20:29:09.045865   12057 cni.go:84] Creating CNI manager for ""
	I1103 20:29:09.045882   12057 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1103 20:29:09.045893   12057 start_flags.go:323] config:
	{Name:download-only-798930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-798930 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1103 20:29:09.047673   12057 out.go:97] Starting control plane node download-only-798930 in cluster download-only-798930
	I1103 20:29:09.047695   12057 cache.go:121] Beginning downloading kic base image for docker with crio
	I1103 20:29:09.048899   12057 out.go:97] Pulling base image ...
	I1103 20:29:09.048933   12057 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1103 20:29:09.048985   12057 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 in local docker daemon
	I1103 20:29:09.063993   12057 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 to local cache
	I1103 20:29:09.064104   12057 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 in local cache directory
	I1103 20:29:09.064122   12057 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 in local cache directory, skipping pull
	I1103 20:29:09.064127   12057 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 exists in cache, skipping pull
	I1103 20:29:09.064146   12057 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 as a tarball
	I1103 20:29:09.086941   12057 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1103 20:29:09.086960   12057 cache.go:56] Caching tarball of preloaded images
	I1103 20:29:09.087076   12057 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1103 20:29:09.088903   12057 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1103 20:29:09.088925   12057 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1103 20:29:09.121084   12057 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:6681d82b7b719ef3324102b709ec62eb -> /home/jenkins/minikube-integration/17545-5130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1103 20:29:12.784200   12057 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1103 20:29:12.784282   12057 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17545-5130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1103 20:29:13.712599   12057 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1103 20:29:13.712727   12057 profile.go:148] Saving config to /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/download-only-798930/config.json ...
	I1103 20:29:13.712924   12057 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1103 20:29:13.713089   12057 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17545-5130/.minikube/cache/linux/amd64/v1.28.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-798930"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.07s)
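A recurring pattern above: "minikube logs" exits with status 85 on a download-only profile because no control plane was ever started, and the test counts that as a pass. A minimal sketch of tolerating one specific exit code from Go (binary path and profile name are the ones from this run, not the suite's real helper):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-798930")
        out, err := cmd.CombinedOutput()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
            // Expected: a download-only profile has no running control plane,
            // so "minikube logs" reports status 85 rather than success.
            fmt.Printf("exit 85 as expected; output:\n%s", out)
            return
        }
        fmt.Printf("unexpected result: err=%v\n", err)
    }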

                                                
                                    
TestDownloadOnly/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.19s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-798930
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnlyKic (1.26s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-639246 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-639246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-639246
--- PASS: TestDownloadOnlyKic (1.26s)

                                                
                                    
TestBinaryMirror (0.73s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-580639 --alsologtostderr --binary-mirror http://127.0.0.1:33755 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-580639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-580639
--- PASS: TestBinaryMirror (0.73s)
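TestBinaryMirror starts minikube with --binary-mirror http://127.0.0.1:33755 so Kubernetes binaries are fetched from a local HTTP endpoint instead of dl.k8s.io. A stand-in mirror can be as small as a static file server; the directory layout below is an assumption and must mirror the release paths minikube requests:

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Serve files such as ./mirror/release/v1.28.3/bin/linux/amd64/kubectl
        // so request paths keep the same shape as the upstream release host.
        fs := http.FileServer(http.Dir("./mirror"))
        log.Fatal(http.ListenAndServe("127.0.0.1:33755", fs))
    }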

                                                
                                    
TestOffline (91.22s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-496010 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-496010 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m24.956992774s)
helpers_test.go:175: Cleaning up "offline-crio-496010" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-496010
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-496010: (6.262752005s)
--- PASS: TestOffline (91.22s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-643880
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-643880: exit status 85 (63.071763ms)

                                                
                                                
-- stdout --
	* Profile "addons-643880" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-643880"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-643880
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-643880: exit status 85 (65.141164ms)

                                                
                                                
-- stdout --
	* Profile "addons-643880" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-643880"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (136.71s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-643880 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-643880 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m16.70924607s)
--- PASS: TestAddons/Setup (136.71s)

                                                
                                    
TestAddons/parallel/Registry (13.92s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 10.550749ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-g745q" [f1dfd4a5-9963-4985-98c6-e7427baa25ef] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.011894953s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4xwcw" [5a22de6a-dd81-41fc-a1a7-9bbdf76955e8] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.011919288s
addons_test.go:339: (dbg) Run:  kubectl --context addons-643880 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-643880 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-643880 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.618697741s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-643880 ip
2023/11/03 20:31:46 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-643880 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.92s)
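The registry check has two legs: an in-cluster wget --spider against the Service DNS name, then a host-side GET against port 5000 on the node IP (the "[DEBUG] GET http://192.168.49.2:5000" line). A sketch of that host-side probe, reusing the node IP reported by "minikube ip" in this run:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 5 * time.Second}
        // Node IP and registry port as logged above.
        resp, err := client.Get("http://192.168.49.2:5000/")
        if err != nil {
            fmt.Println("registry unreachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("registry responded:", resp.Status)
    }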

                                                
                                    
TestAddons/parallel/InspektorGadget (10.66s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-vkdx2" [701746e6-6e89-43c2-ab11-b717dcf42c38] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.017131839s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-643880
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-643880: (5.641558915s)
--- PASS: TestAddons/parallel/InspektorGadget (10.66s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.77s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 10.96772ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-n4gbx" [c63f6ef8-4bcb-47d0-ad6a-5f786174932e] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.01217354s
addons_test.go:414: (dbg) Run:  kubectl --context addons-643880 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-643880 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.77s)

                                                
                                    
TestAddons/parallel/HelmTiller (10.28s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 2.6345ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-f4x9k" [927a73a8-f1f2-42ae-9cf5-29fd998a00ad] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.010696826s
addons_test.go:472: (dbg) Run:  kubectl --context addons-643880 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-643880 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.776040248s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-643880 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.28s)

                                                
                                    
TestAddons/parallel/CSI (101.32s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 11.106825ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-643880 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-643880 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [99f73050-2285-4312-98bd-080141921951] Pending
helpers_test.go:344: "task-pv-pod" [99f73050-2285-4312-98bd-080141921951] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [99f73050-2285-4312-98bd-080141921951] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.009729643s
addons_test.go:583: (dbg) Run:  kubectl --context addons-643880 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-643880 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-643880 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-643880 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-643880 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-643880 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-643880 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-643880 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6262432f-268f-4551-8136-86098c0a47cc] Pending
helpers_test.go:344: "task-pv-pod-restore" [6262432f-268f-4551-8136-86098c0a47cc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6262432f-268f-4551-8136-86098c0a47cc] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.011378603s
addons_test.go:625: (dbg) Run:  kubectl --context addons-643880 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-643880 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-643880 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-643880 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-643880 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.583984176s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-643880 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (101.32s)
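The long run of identical helpers_test.go:394 lines above is a poll loop: the helper keeps re-reading {.status.phase} until the claim reports Bound or the 6m0s budget runs out. A simplified version of that loop (context and claim name from this run; the 2-second interval is an assumption, not the helper's actual tick):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait above
        for time.Now().Before(deadline) {
            out, _ := exec.Command("kubectl", "--context", "addons-643880",
                "get", "pvc", "hpvc", "-n", "default",
                "-o", "jsonpath={.status.phase}").Output()
            if string(out) == "Bound" {
                fmt.Println("pvc hpvc is Bound")
                return
            }
            time.Sleep(2 * time.Second) // polling interval is an assumption
        }
        fmt.Println("timed out waiting for pvc hpvc")
    }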

                                                
                                    
TestAddons/parallel/Headlamp (12.05s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-643880 --alsologtostderr -v=1
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-94b766c-2gpnt" [41109eae-8ef0-4bb4-a735-d8ce8dce705a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-94b766c-2gpnt" [41109eae-8ef0-4bb4-a735-d8ce8dce705a] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.087478826s
--- PASS: TestAddons/parallel/Headlamp (12.05s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.53s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-2hqdp" [32402633-6e0d-4e19-84c2-f8939aebca6e] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.008171368s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-643880
--- PASS: TestAddons/parallel/CloudSpanner (5.53s)

                                                
                                    
TestAddons/parallel/LocalPath (12.54s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-643880 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-643880 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-643880 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [91b77c04-2be4-4d81-9e0d-339d8824804b] Pending
helpers_test.go:344: "test-local-path" [91b77c04-2be4-4d81-9e0d-339d8824804b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [91b77c04-2be4-4d81-9e0d-339d8824804b] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [91b77c04-2be4-4d81-9e0d-339d8824804b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.009947529s
addons_test.go:890: (dbg) Run:  kubectl --context addons-643880 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-643880 ssh "cat /opt/local-path-provisioner/pvc-e599eada-6185-4b3f-9f52-d42b11fb9454_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-643880 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-643880 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-643880 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.54s)
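The ssh "cat" step above reads the file back from the host directory that local-path-provisioner created for the claim. With the provisioner's default layout, that path is derived from the PV name, namespace, and PVC name; a sketch of the derivation using the values from this run (the layout itself is an assumption about the default config):

    package main

    import (
        "fmt"
        "path/filepath"
    )

    func main() {
        // PV name taken from the ssh "cat" command above; namespace and claim
        // name match the pvc.yaml used by the test.
        pvName := "pvc-e599eada-6185-4b3f-9f52-d42b11fb9454"
        dir := fmt.Sprintf("%s_%s_%s", pvName, "default", "test-pvc")
        fmt.Println(filepath.Join("/opt/local-path-provisioner", dir, "file1"))
    }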

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.45s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-ss2kh" [1eb8f77f-3488-42c5-86e7-82bdacdc4a40] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.011512907s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-643880
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.45s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-643880 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-643880 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.13s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-643880
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-643880: (11.863640168s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-643880
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-643880
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-643880
--- PASS: TestAddons/StoppedEnableDisable (12.13s)

                                                
                                    
TestCertOptions (26.73s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-496392 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1103 21:01:04.817596   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-496392 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (24.183192551s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-496392 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-496392 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-496392 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-496392" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-496392
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-496392: (1.942337865s)
--- PASS: TestCertOptions (26.73s)
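The openssl x509 -text step verifies that the extra --apiserver-ips and --apiserver-names values were baked into the apiserver certificate as SANs. The same check can be done in Go with crypto/x509, assuming the certificate has first been copied off the node (e.g. with "minikube cp" or over "minikube ssh"):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "net"
        "os"
    )

    func main() {
        // apiserver.crt copied off the node first; the local path is an assumption.
        data, err := os.ReadFile("apiserver.crt")
        if err != nil {
            fmt.Println("read failed:", err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println("parse failed:", err)
            return
        }
        // SAN requested via --apiserver-ips in this run.
        want := net.ParseIP("192.168.15.15")
        for _, ip := range cert.IPAddresses {
            if ip.Equal(want) {
                fmt.Println("found SAN IP:", ip)
            }
        }
        fmt.Println("DNS SANs:", cert.DNSNames) // should include localhost and www.google.com
    }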

                                                
                                    
TestCertExpiration (230.89s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-040215 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-040215 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (23.277001313s)
E1103 21:02:34.414742   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/functional-573959/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-040215 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-040215 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (25.625265963s)
helpers_test.go:175: Cleaning up "cert-expiration-040215" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-040215
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-040215: (1.990092356s)
--- PASS: TestCertExpiration (230.89s)
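Both --cert-expiration values are ordinary Go duration strings: 3m produces certificates that expire almost immediately, and 8760h is exactly 365 days, which the second start uses to re-issue long-lived certificates. A quick check of that arithmetic:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        short, _ := time.ParseDuration("3m")   // near-immediate expiry
        long, _ := time.ParseDuration("8760h") // 8760h / 24 = 365 days
        fmt.Println(short, long.Hours()/24, "days")
    }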

                                                
                                    
TestForceSystemdFlag (29.46s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-783736 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-783736 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.950404126s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-783736 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-783736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-783736
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-783736: (2.247335249s)
--- PASS: TestForceSystemdFlag (29.46s)
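
The check above reads CRI-O's generated drop-in to confirm the cgroup manager. A minimal sketch, assuming a scratch profile named systemd-demo; the expected cgroup_manager value is inferred from what --force-systemd is meant to do, not quoted from this run:

    minikube start -p systemd-demo --force-systemd --driver=docker --container-runtime=crio
    # the drop-in should select the systemd cgroup manager
    minikube -p systemd-demo ssh "cat /etc/crio/crio.conf.d/02-crio.conf"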

TestForceSystemdEnv (42.4s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-591925 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-591925 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.846652175s)
helpers_test.go:175: Cleaning up "force-systemd-env-591925" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-591925
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-591925: (2.556987454s)
--- PASS: TestForceSystemdEnv (42.40s)

TestKVMDriverInstallOrUpdate (2.86s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (2.86s)

TestErrorSpam/setup (21.18s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-254193 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-254193 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-254193 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-254193 --driver=docker  --container-runtime=crio: (21.182739955s)
--- PASS: TestErrorSpam/setup (21.18s)

TestErrorSpam/start (0.62s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-254193 --log_dir /tmp/nospam-254193 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-254193 --log_dir /tmp/nospam-254193 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-254193 --log_dir /tmp/nospam-254193 start --dry-run
--- PASS: TestErrorSpam/start (0.62s)

TestErrorSpam/status (0.86s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-254193 --log_dir /tmp/nospam-254193 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-254193 --log_dir /tmp/nospam-254193 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-254193 --log_dir /tmp/nospam-254193 status
--- PASS: TestErrorSpam/status (0.86s)

TestErrorSpam/pause (1.46s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-254193 --log_dir /tmp/nospam-254193 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-254193 --log_dir /tmp/nospam-254193 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-254193 --log_dir /tmp/nospam-254193 pause
--- PASS: TestErrorSpam/pause (1.46s)

TestErrorSpam/unpause (1.44s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-254193 --log_dir /tmp/nospam-254193 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-254193 --log_dir /tmp/nospam-254193 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-254193 --log_dir /tmp/nospam-254193 unpause
--- PASS: TestErrorSpam/unpause (1.44s)

TestErrorSpam/stop (1.38s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-254193 --log_dir /tmp/nospam-254193 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-254193 --log_dir /tmp/nospam-254193 stop: (1.189445648s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-254193 --log_dir /tmp/nospam-254193 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-254193 --log_dir /tmp/nospam-254193 stop
--- PASS: TestErrorSpam/stop (1.38s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17545-5130/.minikube/files/etc/test/nested/copy/11887/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (43.36s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-573959 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-573959 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (43.362735091s)
--- PASS: TestFunctional/serial/StartWithProxy (43.36s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.11s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-573959 --alsologtostderr -v=8
E1103 20:36:33.902536   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: no such file or directory
E1103 20:36:33.908278   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: no such file or directory
E1103 20:36:33.918621   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: no such file or directory
E1103 20:36:33.938878   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: no such file or directory
E1103 20:36:33.979117   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: no such file or directory
E1103 20:36:34.059429   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: no such file or directory
E1103 20:36:34.219871   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: no such file or directory
E1103 20:36:34.540499   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: no such file or directory
E1103 20:36:35.181389   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: no such file or directory
E1103 20:36:36.462519   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: no such file or directory
E1103 20:36:39.022878   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: no such file or directory
E1103 20:36:44.143018   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-573959 --alsologtostderr -v=8: (39.104519505s)
functional_test.go:659: soft start took 39.105316735s for "functional-573959" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.11s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-573959 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.75s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.75s)

TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-573959 /tmp/TestFunctionalserialCacheCmdcacheadd_local3254976276/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 cache add minikube-local-cache-test:functional-573959
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 cache delete minikube-local-cache-test:functional-573959
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-573959
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-573959 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (263.005757ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)
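
The reload sequence above can be replayed by hand: remove a cached image from the node, confirm it is gone, then push the cache back in. A minimal sketch against an existing cluster, assuming a profile named functional-demo:

    minikube -p functional-demo ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-demo ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits non-zero: image is gone
    minikube -p functional-demo cache reload                                            # re-loads the locally cached images into the node
    minikube -p functional-demo ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again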

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 kubectl -- --context functional-573959 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-573959 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (32.72s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-573959 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1103 20:36:54.383377   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: no such file or directory
E1103 20:37:14.864512   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-573959 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.721802744s)
functional_test.go:757: restart took 32.721942667s for "functional-573959" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.72s)
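
The restart above shows how --extra-config threads a component flag through to the apiserver; the command mirrors the test invocation, with the profile name assumed:

    minikube start -p functional-demo --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all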

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-573959 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.29s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-573959 logs: (1.28957173s)
--- PASS: TestFunctional/serial/LogsCmd (1.29s)

TestFunctional/serial/LogsFileCmd (1.31s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 logs --file /tmp/TestFunctionalserialLogsFileCmd2284362465/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-573959 logs --file /tmp/TestFunctionalserialLogsFileCmd2284362465/001/logs.txt: (1.304288852s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.31s)

TestFunctional/serial/InvalidService (4.62s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-573959 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-573959
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-573959: exit status 115 (317.361711ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30319 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-573959 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-573959 delete -f testdata/invalidsvc.yaml: (1.076985817s)
--- PASS: TestFunctional/serial/InvalidService (4.62s)
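
Exit status 115 (SVC_UNREACHABLE) above is minikube refusing to open a tunnel to a service with no running endpoints. A minimal sketch, assuming invalidsvc.yaml is a hypothetical manifest declaring a NodePort service whose selector matches no runnable pod:

    kubectl apply -f invalidsvc.yaml
    minikube service invalid-svc        # exit status 115: no running pod for service invalid-svc found
    kubectl delete -f invalidsvc.yaml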

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-573959 config get cpus: exit status 14 (78.833396ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-573959 config get cpus: exit status 14 (95.703284ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
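
The exit codes above encode the config lookup result: a missing key yields exit status 14, a set key exits 0. A minimal sketch of the same set/get/unset cycle:

    minikube config set cpus 2
    minikube config get cpus      # prints 2
    minikube config unset cpus
    minikube config get cpus      # exit status 14: key not found in config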

TestFunctional/parallel/DashboardCmd (10.29s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-573959 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-573959 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 45711: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.29s)

TestFunctional/parallel/DryRun (0.51s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-573959 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-573959 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (222.163812ms)

-- stdout --
	* [functional-573959] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17545
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17545-5130/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17545-5130/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1103 20:37:35.989394   44927 out.go:296] Setting OutFile to fd 1 ...
	I1103 20:37:35.989578   44927 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1103 20:37:35.989596   44927 out.go:309] Setting ErrFile to fd 2...
	I1103 20:37:35.989607   44927 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1103 20:37:35.989831   44927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17545-5130/.minikube/bin
	I1103 20:37:35.990383   44927 out.go:303] Setting JSON to false
	I1103 20:37:35.991698   44927 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1206,"bootTime":1699042650,"procs":531,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1103 20:37:35.991782   44927 start.go:138] virtualization: kvm guest
	I1103 20:37:35.994014   44927 out.go:177] * [functional-573959] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1103 20:37:35.996021   44927 out.go:177]   - MINIKUBE_LOCATION=17545
	I1103 20:37:35.997571   44927 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1103 20:37:35.996103   44927 notify.go:220] Checking for updates...
	I1103 20:37:35.999231   44927 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17545-5130/kubeconfig
	I1103 20:37:36.003234   44927 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17545-5130/.minikube
	I1103 20:37:36.004920   44927 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1103 20:37:36.006181   44927 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1103 20:37:36.008046   44927 config.go:182] Loaded profile config "functional-573959": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1103 20:37:36.008731   44927 driver.go:378] Setting default libvirt URI to qemu:///system
	I1103 20:37:36.042715   44927 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1103 20:37:36.042862   44927 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1103 20:37:36.122317   44927 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:47 SystemTime:2023-11-03 20:37:36.110432844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1103 20:37:36.122449   44927 docker.go:295] overlay module found
	I1103 20:37:36.124763   44927 out.go:177] * Using the docker driver based on existing profile
	I1103 20:37:36.126433   44927 start.go:298] selected driver: docker
	I1103 20:37:36.126447   44927 start.go:902] validating driver "docker" against &{Name:functional-573959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-573959 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1103 20:37:36.126541   44927 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1103 20:37:36.129025   44927 out.go:177] 
	W1103 20:37:36.130674   44927 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1103 20:37:36.132233   44927 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-573959 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.51s)
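
Exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) above comes from start validation alone: --dry-run runs the checks and stops before touching any container. A minimal sketch, assuming an existing profile named functional-demo:

    minikube start -p functional-demo --dry-run --memory 250MB    # exits 23: 250MiB is below the 1800MB usable minimum
    minikube start -p functional-demo --dry-run                   # exits 0 when validation passes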

TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-573959 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-573959 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (193.297072ms)

-- stdout --
	* [functional-573959] minikube v1.32.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17545
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17545-5130/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17545-5130/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1103 20:37:35.781745   44791 out.go:296] Setting OutFile to fd 1 ...
	I1103 20:37:35.781867   44791 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1103 20:37:35.781875   44791 out.go:309] Setting ErrFile to fd 2...
	I1103 20:37:35.781879   44791 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1103 20:37:35.782128   44791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17545-5130/.minikube/bin
	I1103 20:37:35.782626   44791 out.go:303] Setting JSON to false
	I1103 20:37:35.783770   44791 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1206,"bootTime":1699042650,"procs":534,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1103 20:37:35.783821   44791 start.go:138] virtualization: kvm guest
	I1103 20:37:35.786093   44791 out.go:177] * [functional-573959] minikube v1.32.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	I1103 20:37:35.788023   44791 out.go:177]   - MINIKUBE_LOCATION=17545
	I1103 20:37:35.788024   44791 notify.go:220] Checking for updates...
	I1103 20:37:35.789445   44791 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1103 20:37:35.791096   44791 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17545-5130/kubeconfig
	I1103 20:37:35.792493   44791 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17545-5130/.minikube
	I1103 20:37:35.793792   44791 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1103 20:37:35.795115   44791 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1103 20:37:35.797106   44791 config.go:182] Loaded profile config "functional-573959": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1103 20:37:35.797777   44791 driver.go:378] Setting default libvirt URI to qemu:///system
	I1103 20:37:35.827104   44791 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1103 20:37:35.827180   44791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1103 20:37:35.901155   44791 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:47 SystemTime:2023-11-03 20:37:35.890130166 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1103 20:37:35.901262   44791 docker.go:295] overlay module found
	I1103 20:37:35.904463   44791 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1103 20:37:35.906063   44791 start.go:298] selected driver: docker
	I1103 20:37:35.906078   44791 start.go:902] validating driver "docker" against &{Name:functional-573959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698881667-17516@sha256:966390c8d9b756c6e7044095f0ca5e5551da4c170cb501439eea24d1ad19bb89 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-573959 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1103 20:37:35.906204   44791 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1103 20:37:35.908699   44791 out.go:177] 
	W1103 20:37:35.910175   44791 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1103 20:37:35.911599   44791 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

TestFunctional/parallel/StatusCmd (1.32s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.32s)
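
The -f flag above takes a Go template over the status struct (note the test's own format string spells its label "kublet"; only the {{.Field}} names must match), and -o json dumps the same data. A minimal sketch, profile name assumed:

    minikube -p functional-demo status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    minikube -p functional-demo status -o json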

TestFunctional/parallel/ServiceCmdConnect (10.69s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-573959 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-573959 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-jb8kj" [6d179089-ce9b-4884-8082-cc6b3ef86c2c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-jb8kj" [6d179089-ce9b-4884-8082-cc6b3ef86c2c] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.070505946s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:32629
functional_test.go:1674: http://192.168.49.2:32629: success! body:

Hostname: hello-node-connect-55497b8b78-jb8kj

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32629
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.69s)
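
The endpoint discovery above is the standard NodePort flow: expose a deployment, then let minikube resolve the node IP and port. A minimal sketch, with an arbitrary deployment name:

    kubectl create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl expose deployment hello-node --type=NodePort --port=8080
    minikube service hello-node --url     # prints something like http://192.168.49.2:<nodeport>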

TestFunctional/parallel/AddonsCmd (0.26s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)

TestFunctional/parallel/PersistentVolumeClaim (30.1s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [65040ce4-947c-4027-bdcc-e0be2490e447] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.010550725s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-573959 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-573959 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-573959 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-573959 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ec04deb1-dcd8-4ce9-88c5-d28dad55c554] Pending
helpers_test.go:344: "sp-pod" [ec04deb1-dcd8-4ce9-88c5-d28dad55c554] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ec04deb1-dcd8-4ce9-88c5-d28dad55c554] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.010409796s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-573959 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-573959 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-573959 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [220a9117-0f93-4dc6-be0c-5ea9d63c1e0c] Pending
helpers_test.go:344: "sp-pod" [220a9117-0f93-4dc6-be0c-5ea9d63c1e0c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [220a9117-0f93-4dc6-be0c-5ea9d63c1e0c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.011601516s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-573959 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.10s)
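
The point of the sequence above is that data written to the claim survives pod deletion. A minimal sketch, assuming hypothetical manifests pvc.yaml (a PersistentVolumeClaim) and pod.yaml (a pod named sp-pod mounting it at /tmp/mount):

    kubectl apply -f pvc.yaml
    kubectl apply -f pod.yaml
    kubectl exec sp-pod -- touch /tmp/mount/foo
    kubectl delete -f pod.yaml && kubectl apply -f pod.yaml
    kubectl exec sp-pod -- ls /tmp/mount    # foo is still there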

TestFunctional/parallel/SSHCmd (0.7s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

TestFunctional/parallel/CpCmd (1.29s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh -n functional-573959 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 cp functional-573959:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd416293076/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh -n functional-573959 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.29s)
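
minikube cp copies in both directions, with node-side sources prefixed by the profile name, as the log above shows. A minimal sketch, assuming an existing profile named functional-demo:

    minikube -p functional-demo cp ./cp-test.txt /home/docker/cp-test.txt                    # host to node
    minikube -p functional-demo cp functional-demo:/home/docker/cp-test.txt ./cp-copy.txt    # node to host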

TestFunctional/parallel/MySQL (19.88s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-573959 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-pxggt" [9302d29a-8a58-4043-9275-0779994a91ce] Pending
helpers_test.go:344: "mysql-859648c796-pxggt" [9302d29a-8a58-4043-9275-0779994a91ce] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-pxggt" [9302d29a-8a58-4043-9275-0779994a91ce] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.010411529s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-573959 exec mysql-859648c796-pxggt -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-573959 exec mysql-859648c796-pxggt -- mysql -ppassword -e "show databases;": exit status 1 (137.575434ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-573959 exec mysql-859648c796-pxggt -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (19.88s)
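The first exec failing with ERROR 2002 is a timing artifact, not a regression: the pod reports Running before mysqld has finished creating its socket, so the test retries and the second attempt succeeds. A hedged sketch of the same wait-and-retry, assuming the pod name from this run:

    until kubectl --context functional-573959 exec mysql-859648c796-pxggt -- \
          mysql -ppassword -e "show databases;"; do
        sleep 2    # mysqld can lag pod readiness; retry until the socket accepts connections
    done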

TestFunctional/parallel/FileSync (0.27s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/11887/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh "sudo cat /etc/test/nested/copy/11887/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

TestFunctional/parallel/CertSync (1.65s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/11887.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh "sudo cat /etc/ssl/certs/11887.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/11887.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh "sudo cat /usr/share/ca-certificates/11887.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/118872.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh "sudo cat /etc/ssl/certs/118872.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/118872.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh "sudo cat /usr/share/ca-certificates/118872.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.65s)
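The /etc/ssl/certs entries come from minikube's cert sync (the 51391683.0-style names follow the OpenSSL subject-hash link convention), and the pattern generalizes: files staged under the host's ~/.minikube/files/ tree are, as far as the sync mechanism is documented, copied into the node at the corresponding absolute path on the next start. A hypothetical sketch:

    mkdir -p ~/.minikube/files/etc/myapp                      # hypothetical staging path
    cp config.toml ~/.minikube/files/etc/myapp/config.toml    # assumed local file
    out/minikube-linux-amd64 start -p functional-573959       # sync happens at start
    out/minikube-linux-amd64 -p functional-573959 ssh "sudo cat /etc/myapp/config.toml"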

TestFunctional/parallel/NodeLabels (0.09s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-573959 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-573959 ssh "sudo systemctl is-active docker": exit status 1 (447.337793ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-573959 ssh "sudo systemctl is-active containerd": exit status 1 (298.212406ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)
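The exit status 1 results are the expected outcome: systemctl is-active exits non-zero for a unit that is not active (status 3 means inactive), and minikube ssh propagates the remote exit code, so the test passes precisely because docker and containerd are disabled on this cri-o cluster. To survey all three runtimes at once:

    for rt in crio docker containerd; do
        out/minikube-linux-amd64 -p functional-573959 ssh "sudo systemctl is-active $rt" \
            && echo "$rt is active" || echo "$rt is not active"
    done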

TestFunctional/parallel/License (0.22s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.22s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-573959 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-573959 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-7rlcm" [2b18cb2e-104c-4a8e-a2cb-5891ecb9f6aa] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-7rlcm" [2b18cb2e-104c-4a8e-a2cb-5891ecb9f6aa] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.018068644s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)
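With the Deployment exposed as a NodePort, the endpoint can be resolved and probed from the host; a minimal sketch using the URL helper exercised later in this run:

    URL=$(out/minikube-linux-amd64 -p functional-573959 service hello-node --url)
    curl -s "$URL"    # echoserver replies with the details of the request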

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/MountCmd/any-port (8.69s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-573959 /tmp/TestFunctionalparallelMountCmdany-port3257854783/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1699043854724437511" to /tmp/TestFunctionalparallelMountCmdany-port3257854783/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1699043854724437511" to /tmp/TestFunctionalparallelMountCmdany-port3257854783/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1699043854724437511" to /tmp/TestFunctionalparallelMountCmdany-port3257854783/001/test-1699043854724437511
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-573959 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (310.506055ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  3 20:37 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  3 20:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  3 20:37 test-1699043854724437511
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh cat /mount-9p/test-1699043854724437511
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-573959 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c0abb0e8-6963-4316-958b-fb55a7a05bef] Pending
helpers_test.go:344: "busybox-mount" [c0abb0e8-6963-4316-958b-fb55a7a05bef] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c0abb0e8-6963-4316-958b-fb55a7a05bef] Running
helpers_test.go:344: "busybox-mount" [c0abb0e8-6963-4316-958b-fb55a7a05bef] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c0abb0e8-6963-4316-958b-fb55a7a05bef] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.0418549s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-573959 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-573959 /tmp/TestFunctionalparallelMountCmdany-port3257854783/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.69s)
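Condensed, the any-port flow is: start a 9p mount daemon for a host directory, confirm it from inside the node, exercise it from a pod, and stop the daemon to unmount. A minimal sketch, assuming a host directory ./shared (the first findmnt above failed only because the daemon was still starting, so a short retry is normal):

    out/minikube-linux-amd64 mount -p functional-573959 ./shared:/mount-9p &    # background the 9p server
    MOUNT_PID=$!
    out/minikube-linux-amd64 -p functional-573959 ssh "findmnt -T /mount-9p | grep 9p"    # confirm the mount
    out/minikube-linux-amd64 -p functional-573959 ssh "ls -la /mount-9p"
    kill $MOUNT_PID    # stopping the daemon removes the mount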

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "358.790409ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "68.167153ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "342.446394ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "87.214858ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
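The JSON form is the scripting-friendly one; a small sketch, assuming jq is available and hedging on the exact schema (this binary appears to emit top-level valid/invalid profile arrays):

    out/minikube-linux-amd64 profile list -o json | jq .                     # pretty-print the whole document
    out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'    # assumed field path: names of usable profiles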

TestFunctional/parallel/MountCmd/specific-port (1.55s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-573959 /tmp/TestFunctionalparallelMountCmdspecific-port4002822515/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-573959 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (284.952072ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-573959 /tmp/TestFunctionalparallelMountCmdspecific-port4002822515/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-573959 ssh "sudo umount -f /mount-9p": exit status 1 (248.153123ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-573959 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-573959 /tmp/TestFunctionalparallelMountCmdspecific-port4002822515/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.55s)
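Two details worth noting here: --port pins the 9p server to a fixed port, useful when only known ports are reachable, and the final umount -f failing with "not mounted" (ssh status 32) is expected, because stopping the mount daemon had already removed the mount. A minimal sketch:

    out/minikube-linux-amd64 mount -p functional-573959 ./shared:/mount-9p --port 46464 &    # fixed server port
    out/minikube-linux-amd64 -p functional-573959 ssh "findmnt -T /mount-9p"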

TestFunctional/parallel/MountCmd/VerifyCleanup (1.63s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-573959 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3220586076/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-573959 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3220586076/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-573959 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3220586076/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-573959 ssh "findmnt -T" /mount1: exit status 1 (377.478162ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-573959 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-573959 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3220586076/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-573959 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3220586076/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-573959 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3220586076/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.63s)
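The --kill flag is the blanket cleanup: it tears down every mount daemon belonging to the profile, which is why the three per-mount stop attempts afterwards find no parent process left. One line:

    out/minikube-linux-amd64 mount -p functional-573959 --kill=true    # kill all outstanding mount daemons for this profile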

TestFunctional/parallel/ServiceCmd/List (0.49s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.49s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 service list -o json
functional_test.go:1493: Took "486.094806ms" to run "out/minikube-linux-amd64 -p functional-573959 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 service --namespace=default --https --url hello-node
2023/11/03 20:37:46 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1521: found endpoint: https://192.168.49.2:30761
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)
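The --https --url pair prints the NodePort endpoint with an https scheme instead of opening a browser, which is what makes it usable in headless CI:

    out/minikube-linux-amd64 -p functional-573959 service --namespace=default --https --url hello-node
    # prints the endpoint, e.g. https://192.168.49.2:30761 in this run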

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-573959 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-573959 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-573959 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 47714: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-573959 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

TestFunctional/parallel/ServiceCmd/Format (0.54s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.54s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-573959 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.47s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-573959 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [1eb01ab3-eac9-4c90-87a7-c21ac48c82a3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [1eb01ab3-eac9-4c90-87a7-c21ac48c82a3] Running
E1103 20:37:55.824814   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: no such file or directory
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 13.07432466s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.47s)
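minikube tunnel is what gives LoadBalancer services a host-reachable IP; the usual flow, sketched under the assumption that testdata/testsvc.yaml exposes nginx-svc as a LoadBalancer (which is what the tunnel checks exercise):

    out/minikube-linux-amd64 -p functional-573959 tunnel &    # routes LoadBalancer IPs to the host; must keep running
    kubectl --context functional-573959 get svc nginx-svc     # EXTERNAL-IP moves from <pending> to an address
    IP=$(kubectl --context functional-573959 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s "http://$IP"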

TestFunctional/parallel/ServiceCmd/URL (0.69s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30761
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.69s)

TestFunctional/parallel/Version/short (0.08s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.6s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.60s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-573959 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-573959
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-573959 image ls --format short --alsologtostderr:
I1103 20:38:12.485077   51443 out.go:296] Setting OutFile to fd 1 ...
I1103 20:38:12.485277   51443 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1103 20:38:12.485298   51443 out.go:309] Setting ErrFile to fd 2...
I1103 20:38:12.485309   51443 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1103 20:38:12.485497   51443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17545-5130/.minikube/bin
I1103 20:38:12.486091   51443 config.go:182] Loaded profile config "functional-573959": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1103 20:38:12.486230   51443 config.go:182] Loaded profile config "functional-573959": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1103 20:38:12.486641   51443 cli_runner.go:164] Run: docker container inspect functional-573959 --format={{.State.Status}}
I1103 20:38:12.504290   51443 ssh_runner.go:195] Run: systemctl --version
I1103 20:38:12.504339   51443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-573959
I1103 20:38:12.521571   51443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/functional-573959/id_rsa Username:docker}
I1103 20:38:12.609327   51443 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
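All four list formats in this report (short, table, json, yaml) are views over the same snapshot the CLI takes via sudo crictl images --output json on the node, so they can be used interchangeably; json is the natural one to script against, as in this sketch:

    out/minikube-linux-amd64 -p functional-573959 image ls --format json | jq -r '.[].repoTags[]?'    # assumed jq path over the array shown below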

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-573959 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-apiserver          | v1.28.3            | 5374347291230 | 127MB  |
| registry.k8s.io/kube-scheduler          | v1.28.3            | 6d1b4fd1b182d | 61.5MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/nginx                 | alpine             | b135667c98980 | 49.5MB |
| registry.k8s.io/kube-controller-manager | v1.28.3            | 10baa1ca17068 | 123MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 547b3c3c15a96 | 520MB  |
| gcr.io/google-containers/addon-resizer  | functional-573959  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| docker.io/library/nginx                 | latest             | c20060033e06f | 191MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| registry.k8s.io/kube-proxy              | v1.28.3            | bfc896cf80fba | 74.7MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-573959 image ls --format table --alsologtostderr:
I1103 20:38:12.718514   51599 out.go:296] Setting OutFile to fd 1 ...
I1103 20:38:12.718663   51599 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1103 20:38:12.718672   51599 out.go:309] Setting ErrFile to fd 2...
I1103 20:38:12.718677   51599 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1103 20:38:12.718891   51599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17545-5130/.minikube/bin
I1103 20:38:12.719684   51599 config.go:182] Loaded profile config "functional-573959": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1103 20:38:12.719833   51599 config.go:182] Loaded profile config "functional-573959": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1103 20:38:12.720443   51599 cli_runner.go:164] Run: docker container inspect functional-573959 --format={{.State.Status}}
I1103 20:38:12.740749   51599 ssh_runner.go:195] Run: systemctl --version
I1103 20:38:12.740802   51599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-573959
I1103 20:38:12.759998   51599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/functional-573959/id_rsa Username:docker}
I1103 20:38:12.848508   51599 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-573959 image ls --format json --alsologtostderr:
[{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647","repoDigests":["docker.io/library/nginx@sha256:86e53c4c16a6a276b204b0fd3a8143d86547c967dc8258b3d47c3a21bb68d3c6","docker.io/library/nginx@sha256:d2e65182b5fd330470eca9b8e23e8a1a0d87cc9b820eb1fb3f034bf8248d37ee"],"repoTags":["docker.io/library/nginx:latest"],"size":"190960382"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","repoDigests":["registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab","registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"127165392"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"b135667c98980d3ca424a228cc4d2afdb287dc4e1a6a813a34b2e1705517488e","repoDigests":["docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d","docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77"],"repoTags":["docker.io/library/nginx:alpine"],"size":"49538855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707","registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"123188534"},{"id":"bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","repoDigests":["registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8","registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"74691991"},{"id":"6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725","registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"61498678"},{"id":"547b3c3c15a9698ee368530b251e6baa66807c64742355e6724ba59b4d3ec8a6","repoDigests":["docker.io/library/mysql@sha256:444e015ba2ad9fc0884a82cef6c3b15f89db003aef11b55e4daca24f55538cb9","docker.io/library/mysql@sha256:880063e8acda81825f0b946eff47c45235840480da03e71a22113ebafe166a3d"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519576537"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-573959"],"size":"34114467"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-573959 image ls --format json --alsologtostderr:
I1103 20:38:12.494913   51444 out.go:296] Setting OutFile to fd 1 ...
I1103 20:38:12.495156   51444 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1103 20:38:12.495187   51444 out.go:309] Setting ErrFile to fd 2...
I1103 20:38:12.495205   51444 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1103 20:38:12.495511   51444 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17545-5130/.minikube/bin
I1103 20:38:12.496123   51444 config.go:182] Loaded profile config "functional-573959": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1103 20:38:12.496319   51444 config.go:182] Loaded profile config "functional-573959": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1103 20:38:12.496818   51444 cli_runner.go:164] Run: docker container inspect functional-573959 --format={{.State.Status}}
I1103 20:38:12.514202   51444 ssh_runner.go:195] Run: systemctl --version
I1103 20:38:12.514247   51444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-573959
I1103 20:38:12.534397   51444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/functional-573959/id_rsa Username:docker}
I1103 20:38:12.620188   51444 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-573959 image ls --format yaml --alsologtostderr:
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf
repoDigests:
- registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8
- registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "74691991"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647
repoDigests:
- docker.io/library/nginx@sha256:86e53c4c16a6a276b204b0fd3a8143d86547c967dc8258b3d47c3a21bb68d3c6
- docker.io/library/nginx@sha256:d2e65182b5fd330470eca9b8e23e8a1a0d87cc9b820eb1fb3f034bf8248d37ee
repoTags:
- docker.io/library/nginx:latest
size: "190960382"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707
- registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "123188534"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: b135667c98980d3ca424a228cc4d2afdb287dc4e1a6a813a34b2e1705517488e
repoDigests:
- docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d
- docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77
repoTags:
- docker.io/library/nginx:alpine
size: "49538855"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab
- registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "127165392"
- id: 6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725
- registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "61498678"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 547b3c3c15a9698ee368530b251e6baa66807c64742355e6724ba59b4d3ec8a6
repoDigests:
- docker.io/library/mysql@sha256:444e015ba2ad9fc0884a82cef6c3b15f89db003aef11b55e4daca24f55538cb9
- docker.io/library/mysql@sha256:880063e8acda81825f0b946eff47c45235840480da03e71a22113ebafe166a3d
repoTags:
- docker.io/library/mysql:5.7
size: "519576537"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-573959
size: "34114467"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-573959 image ls --format yaml --alsologtostderr:
I1103 20:38:12.500778   51442 out.go:296] Setting OutFile to fd 1 ...
I1103 20:38:12.501170   51442 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1103 20:38:12.501209   51442 out.go:309] Setting ErrFile to fd 2...
I1103 20:38:12.501231   51442 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1103 20:38:12.501898   51442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17545-5130/.minikube/bin
I1103 20:38:12.502815   51442 config.go:182] Loaded profile config "functional-573959": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1103 20:38:12.503009   51442 config.go:182] Loaded profile config "functional-573959": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1103 20:38:12.503593   51442 cli_runner.go:164] Run: docker container inspect functional-573959 --format={{.State.Status}}
I1103 20:38:12.523151   51442 ssh_runner.go:195] Run: systemctl --version
I1103 20:38:12.523221   51442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-573959
I1103 20:38:12.545072   51442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/functional-573959/id_rsa Username:docker}
I1103 20:38:12.628482   51442 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.76s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-573959 ssh pgrep buildkitd: exit status 1 (258.556693ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 image build -t localhost/my-image:functional-573959 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-573959 image build -t localhost/my-image:functional-573959 testdata/build --alsologtostderr: (1.285415263s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-573959 image build -t localhost/my-image:functional-573959 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8b9838c72a2
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-573959
--> dbc2e5ff1b2
Successfully tagged localhost/my-image:functional-573959
dbc2e5ff1b26236ed1353756a0073f71e554d637e0761712c1fde0aee960ffe2
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-573959 image build -t localhost/my-image:functional-573959 testdata/build --alsologtostderr:
I1103 20:38:13.018361   51737 out.go:296] Setting OutFile to fd 1 ...
I1103 20:38:13.018576   51737 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1103 20:38:13.018588   51737 out.go:309] Setting ErrFile to fd 2...
I1103 20:38:13.018595   51737 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1103 20:38:13.018842   51737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17545-5130/.minikube/bin
I1103 20:38:13.019584   51737 config.go:182] Loaded profile config "functional-573959": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1103 20:38:13.020122   51737 config.go:182] Loaded profile config "functional-573959": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1103 20:38:13.020564   51737 cli_runner.go:164] Run: docker container inspect functional-573959 --format={{.State.Status}}
I1103 20:38:13.036325   51737 ssh_runner.go:195] Run: systemctl --version
I1103 20:38:13.036371   51737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-573959
I1103 20:38:13.052800   51737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/functional-573959/id_rsa Username:docker}
I1103 20:38:13.136759   51737 build_images.go:151] Building image from path: /tmp/build.3800318179.tar
I1103 20:38:13.136843   51737 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1103 20:38:13.144863   51737 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3800318179.tar
I1103 20:38:13.147690   51737 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3800318179.tar: stat -c "%s %y" /var/lib/minikube/build/build.3800318179.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3800318179.tar': No such file or directory
I1103 20:38:13.147717   51737 ssh_runner.go:362] scp /tmp/build.3800318179.tar --> /var/lib/minikube/build/build.3800318179.tar (3072 bytes)
I1103 20:38:13.168066   51737 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3800318179
I1103 20:38:13.175533   51737 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3800318179 -xf /var/lib/minikube/build/build.3800318179.tar
I1103 20:38:13.183573   51737 crio.go:297] Building image: /var/lib/minikube/build/build.3800318179
I1103 20:38:13.183628   51737 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-573959 /var/lib/minikube/build/build.3800318179 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1103 20:38:14.225015   51737 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-573959 /var/lib/minikube/build/build.3800318179 --cgroup-manager=cgroupfs: (1.041361065s)
I1103 20:38:14.225060   51737 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3800318179
I1103 20:38:14.233031   51737 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3800318179.tar
I1103 20:38:14.240271   51737 build_images.go:207] Built localhost/my-image:functional-573959 from /tmp/build.3800318179.tar
I1103 20:38:14.240294   51737 build_images.go:123] succeeded building to: functional-573959
I1103 20:38:14.240298   51737 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.76s)
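Note: the passing build above can be reproduced by hand against the same profile; a minimal sketch (profile and tag names as used in this run):

  out/minikube-linux-amd64 -p functional-573959 image build -t localhost/my-image:functional-573959 testdata/build --alsologtostderr
  out/minikube-linux-amd64 -p functional-573959 image ls    # the new localhost/my-image tag should be listed

As the stderr above shows, on the crio runtime minikube ships the build context to the node as a tarball and delegates the actual build to podman inside the node.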

x
+
TestFunctional/parallel/ImageCommands/Setup (1.01s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-573959
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.01s)

x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 image load --daemon gcr.io/google-containers/addon-resizer:functional-573959 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-573959 image load --daemon gcr.io/google-containers/addon-resizer:functional-573959 --alsologtostderr: (6.103213812s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.32s)

x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 image load --daemon gcr.io/google-containers/addon-resizer:functional-573959 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-573959 image load --daemon gcr.io/google-containers/addon-resizer:functional-573959 --alsologtostderr: (2.531492s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.74s)

x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-573959
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 image load --daemon gcr.io/google-containers/addon-resizer:functional-573959 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-573959 image load --daemon gcr.io/google-containers/addon-resizer:functional-573959 --alsologtostderr: (3.561091646s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.64s)
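Note: ImageLoadDaemon, ImageReloadDaemon and ImageTagAndLoadDaemon all exercise the same pull/tag/load flow; a minimal sketch (image and profile names as used in this run):

  docker pull gcr.io/google-containers/addon-resizer:1.8.9
  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-573959
  out/minikube-linux-amd64 -p functional-573959 image load --daemon gcr.io/google-containers/addon-resizer:functional-573959
  out/minikube-linux-amd64 -p functional-573959 image ls    # the tag should now be visible inside the crio node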

x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-573959 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
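Note: this check reads the ingress IP that minikube tunnel populates on the LoadBalancer service; a hedged equivalent that polls until the field is set (the loop itself is not part of the test):

  until IP=$(kubectl --context functional-573959 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}') && [ -n "$IP" ]; do sleep 2; done
  echo "$IP"    # only populated while the tunnel process is running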

x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.11.133 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-573959 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 image save gcr.io/google-containers/addon-resizer:functional-573959 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-573959 image save gcr.io/google-containers/addon-resizer:functional-573959 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.680971857s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.68s)

x
+
TestFunctional/parallel/ImageCommands/ImageRemove (1.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 image rm gcr.io/google-containers/addon-resizer:functional-573959 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.01s)

x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-573959
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-573959 image save --daemon gcr.io/google-containers/addon-resizer:functional-573959 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-573959 image save --daemon gcr.io/google-containers/addon-resizer:functional-573959 --alsologtostderr: (1.059998881s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-573959
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.10s)
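Note: ImageSaveToFile and ImageSaveDaemon cover the two export paths out of the node; a minimal sketch (the relative tarball path here is illustrative; the run above used an absolute workspace path):

  out/minikube-linux-amd64 -p functional-573959 image save gcr.io/google-containers/addon-resizer:functional-573959 ./addon-resizer-save.tar
  out/minikube-linux-amd64 -p functional-573959 image save --daemon gcr.io/google-containers/addon-resizer:functional-573959
  docker image inspect gcr.io/google-containers/addon-resizer:functional-573959    # confirms the image landed back in the local daemon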

x
+
TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-573959
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

x
+
TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-573959
--- PASS: TestFunctional/delete_my-image_image (0.01s)

x
+
TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-573959
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (65.97s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-656945 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1103 20:39:17.745058   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-656945 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m5.967852446s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (65.97s)

x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.23s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-656945 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-656945 addons enable ingress --alsologtostderr -v=5: (10.227213879s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.23s)

x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-656945 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

x
+
TestJSONOutput/start/Command (66.95s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-464835 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1103 20:42:44.654756   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/functional-573959/client.crt: no such file or directory
E1103 20:42:54.895009   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/functional-573959/client.crt: no such file or directory
E1103 20:43:15.375435   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/functional-573959/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-464835 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m6.950263799s)
--- PASS: TestJSONOutput/start/Command (66.95s)

x
+
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

x
+
TestJSONOutput/pause/Command (0.64s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-464835 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

x
+
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

x
+
TestJSONOutput/unpause/Command (0.57s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-464835 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

x
+
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

x
+
TestJSONOutput/stop/Command (5.71s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-464835 --output=json --user=testUser
E1103 20:43:56.336538   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/functional-573959/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-464835 --output=json --user=testUser: (5.704912188s)
--- PASS: TestJSONOutput/stop/Command (5.71s)

x
+
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

x
+
TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-354659 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-354659 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (80.056973ms)

-- stdout --
	{"specversion":"1.0","id":"6293a5a6-b71f-4cd5-b699-a7262c858ef4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-354659] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ad063631-528a-4644-8fbe-a43638f4550b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17545"}}
	{"specversion":"1.0","id":"d38688e9-2936-4777-ac3b-8c7040cf28c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f9f8792c-3203-429f-99ee-59f8a16b3f14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17545-5130/kubeconfig"}}
	{"specversion":"1.0","id":"a1d66039-8aac-409f-92d0-079235f3d769","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17545-5130/.minikube"}}
	{"specversion":"1.0","id":"44ebd67f-bc40-4d97-8785-075f1dd92f82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0ae2419a-4590-426a-868f-0823f6628233","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2336cf71-6e44-4b0d-80d8-d953ae7c998e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-354659" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-354659
--- PASS: TestErrorJSONOutput (0.22s)
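Note: with --output=json every status line is a CloudEvents envelope, so the stream can be filtered mechanically; a hedged sketch using jq (jq is not part of the test itself):

  out/minikube-linux-amd64 start -p json-output-error-354659 --memory=2200 --output=json --wait=true --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
  # should print: The driver 'fail' is not supported on linux/amd64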

x
+
TestKicCustomNetwork/create_custom_network (31.29s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-961306 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-961306 --network=: (29.243519696s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-961306" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-961306
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-961306: (2.030723502s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.29s)

x
+
TestKicCustomNetwork/use_default_bridge_network (24.16s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-117767 --network=bridge
E1103 20:44:41.770862   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt: no such file or directory
E1103 20:44:41.776677   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt: no such file or directory
E1103 20:44:41.787011   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt: no such file or directory
E1103 20:44:41.807857   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt: no such file or directory
E1103 20:44:41.848191   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt: no such file or directory
E1103 20:44:41.928525   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt: no such file or directory
E1103 20:44:42.088900   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt: no such file or directory
E1103 20:44:42.409492   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt: no such file or directory
E1103 20:44:43.050692   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt: no such file or directory
E1103 20:44:44.330868   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt: no such file or directory
E1103 20:44:46.892667   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt: no such file or directory
E1103 20:44:52.013343   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-117767 --network=bridge: (22.245538956s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-117767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-117767
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-117767: (1.900826816s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.16s)

x
+
TestKicExistingNetwork (25.97s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-872649 --network=existing-network
E1103 20:45:02.254235   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt: no such file or directory
E1103 20:45:18.257624   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/functional-573959/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-872649 --network=existing-network: (23.95814585s)
helpers_test.go:175: Cleaning up "existing-network-872649" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-872649
E1103 20:45:22.735353   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-872649: (1.890659029s)
--- PASS: TestKicExistingNetwork (25.97s)

x
+
TestKicCustomSubnet (26.63s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-624387 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-624387 --subnet=192.168.60.0/24: (24.527002449s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-624387 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-624387" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-624387
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-624387: (2.082926936s)
--- PASS: TestKicCustomSubnet (26.63s)
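Note: the custom-subnet assertion reduces to a single inspect on the network minikube created; equivalently (names as used in this run):

  out/minikube-linux-amd64 start -p custom-subnet-624387 --subnet=192.168.60.0/24
  docker network inspect custom-subnet-624387 --format '{{(index .IPAM.Config 0).Subnet}}'    # should print the requested 192.168.60.0/24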

x
+
TestKicStaticIP (27.31s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-555493 --static-ip=192.168.200.200
E1103 20:46:03.696098   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-555493 --static-ip=192.168.200.200: (25.106062459s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-555493 ip
helpers_test.go:175: Cleaning up "static-ip-555493" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-555493
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-555493: (2.06162869s)
--- PASS: TestKicStaticIP (27.31s)

x
+
TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

x
+
TestMinikubeProfile (49.87s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-780152 --driver=docker  --container-runtime=crio
E1103 20:46:33.902774   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-780152 --driver=docker  --container-runtime=crio: (21.352249812s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-783384 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-783384 --driver=docker  --container-runtime=crio: (23.824190391s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-780152
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-783384
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-783384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-783384
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-783384: (1.857410256s)
helpers_test.go:175: Cleaning up "first-780152" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-780152
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-780152: (1.846842086s)
--- PASS: TestMinikubeProfile (49.87s)

x
+
TestMountStart/serial/StartWithMountFirst (5.23s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-472100 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-472100 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.230595578s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.23s)

x
+
TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-472100 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

x
+
TestMountStart/serial/StartWithMountSecond (8s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-489412 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-489412 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.998480859s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.00s)

x
+
TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-489412 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

x
+
TestMountStart/serial/DeleteFirst (1.58s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-472100 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-472100 --alsologtostderr -v=5: (1.583404085s)
--- PASS: TestMountStart/serial/DeleteFirst (1.58s)

x
+
TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-489412 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

x
+
TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-489412
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-489412: (1.208598346s)
--- PASS: TestMountStart/serial/Stop (1.21s)

x
+
TestMountStart/serial/RestartStopped (7.03s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-489412
E1103 20:47:25.616689   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-489412: (6.033132934s)
--- PASS: TestMountStart/serial/RestartStopped (7.03s)

x
+
TestMountStart/serial/VerifyMountPostStop (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-489412 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

x
+
TestMultiNode/serial/FreshStart2Nodes (84.17s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-280480 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1103 20:47:34.414217   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/functional-573959/client.crt: no such file or directory
E1103 20:48:02.098494   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/functional-573959/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-280480 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m23.742499736s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (84.17s)
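Note: the two-node bring-up plus health check reduce to (flags as used in this run):

  out/minikube-linux-amd64 start -p multinode-280480 --wait=true --memory=2200 --nodes=2 --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 -p multinode-280480 status --alsologtostderr    # one stanza per node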

x
+
TestMultiNode/serial/DeployApp2Nodes (3.66s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-280480 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-280480 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-280480 -- rollout status deployment/busybox: (1.860661012s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-280480 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-280480 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-280480 -- exec busybox-5bc68d56bd-5rnbm -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-280480 -- exec busybox-5bc68d56bd-z5cz8 -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-280480 -- exec busybox-5bc68d56bd-5rnbm -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-280480 -- exec busybox-5bc68d56bd-z5cz8 -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-280480 -- exec busybox-5bc68d56bd-5rnbm -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-280480 -- exec busybox-5bc68d56bd-z5cz8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.66s)

x
+
TestMultiNode/serial/AddNode (15.94s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-280480 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-280480 -v 3 --alsologtostderr: (15.368166818s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (15.94s)

x
+
TestMultiNode/serial/ProfileList (0.27s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.27s)

x
+
TestMultiNode/serial/CopyFile (8.93s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 cp testdata/cp-test.txt multinode-280480:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 ssh -n multinode-280480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 cp multinode-280480:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2070318697/001/cp-test_multinode-280480.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 ssh -n multinode-280480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 cp multinode-280480:/home/docker/cp-test.txt multinode-280480-m02:/home/docker/cp-test_multinode-280480_multinode-280480-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 ssh -n multinode-280480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 ssh -n multinode-280480-m02 "sudo cat /home/docker/cp-test_multinode-280480_multinode-280480-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 cp multinode-280480:/home/docker/cp-test.txt multinode-280480-m03:/home/docker/cp-test_multinode-280480_multinode-280480-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 ssh -n multinode-280480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 ssh -n multinode-280480-m03 "sudo cat /home/docker/cp-test_multinode-280480_multinode-280480-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 cp testdata/cp-test.txt multinode-280480-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 ssh -n multinode-280480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 cp multinode-280480-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2070318697/001/cp-test_multinode-280480-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 ssh -n multinode-280480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 cp multinode-280480-m02:/home/docker/cp-test.txt multinode-280480:/home/docker/cp-test_multinode-280480-m02_multinode-280480.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 ssh -n multinode-280480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 ssh -n multinode-280480 "sudo cat /home/docker/cp-test_multinode-280480-m02_multinode-280480.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 cp multinode-280480-m02:/home/docker/cp-test.txt multinode-280480-m03:/home/docker/cp-test_multinode-280480-m02_multinode-280480-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 ssh -n multinode-280480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 ssh -n multinode-280480-m03 "sudo cat /home/docker/cp-test_multinode-280480-m02_multinode-280480-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 cp testdata/cp-test.txt multinode-280480-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 ssh -n multinode-280480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 cp multinode-280480-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2070318697/001/cp-test_multinode-280480-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 ssh -n multinode-280480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 cp multinode-280480-m03:/home/docker/cp-test.txt multinode-280480:/home/docker/cp-test_multinode-280480-m03_multinode-280480.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 ssh -n multinode-280480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 ssh -n multinode-280480 "sudo cat /home/docker/cp-test_multinode-280480-m03_multinode-280480.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 cp multinode-280480-m03:/home/docker/cp-test.txt multinode-280480-m02:/home/docker/cp-test_multinode-280480-m03_multinode-280480-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 ssh -n multinode-280480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 ssh -n multinode-280480-m02 "sudo cat /home/docker/cp-test_multinode-280480-m03_multinode-280480-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.93s)
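Note: every CopyFile assertion pairs a cp with an ssh cat on the destination; the node-to-node case from the run above, for example:

  out/minikube-linux-amd64 -p multinode-280480 cp multinode-280480:/home/docker/cp-test.txt multinode-280480-m02:/home/docker/cp-test_multinode-280480_multinode-280480-m02.txt
  out/minikube-linux-amd64 -p multinode-280480 ssh -n multinode-280480-m02 "sudo cat /home/docker/cp-test_multinode-280480_multinode-280480-m02.txt"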

x
+
TestMultiNode/serial/StopNode (2.08s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-280480 node stop m03: (1.189949052s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-280480 status: exit status 7 (441.499173ms)

-- stdout --
	multinode-280480
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-280480-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-280480-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-280480 status --alsologtostderr: exit status 7 (444.529832ms)
-- stdout --
	multinode-280480
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-280480-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-280480-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1103 20:49:30.731377  111312 out.go:296] Setting OutFile to fd 1 ...
	I1103 20:49:30.731523  111312 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1103 20:49:30.731532  111312 out.go:309] Setting ErrFile to fd 2...
	I1103 20:49:30.731536  111312 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1103 20:49:30.731736  111312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17545-5130/.minikube/bin
	I1103 20:49:30.731896  111312 out.go:303] Setting JSON to false
	I1103 20:49:30.731923  111312 mustload.go:65] Loading cluster: multinode-280480
	I1103 20:49:30.732029  111312 notify.go:220] Checking for updates...
	I1103 20:49:30.732306  111312 config.go:182] Loaded profile config "multinode-280480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1103 20:49:30.732320  111312 status.go:255] checking status of multinode-280480 ...
	I1103 20:49:30.732727  111312 cli_runner.go:164] Run: docker container inspect multinode-280480 --format={{.State.Status}}
	I1103 20:49:30.750201  111312 status.go:330] multinode-280480 host status = "Running" (err=<nil>)
	I1103 20:49:30.750221  111312 host.go:66] Checking if "multinode-280480" exists ...
	I1103 20:49:30.750427  111312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-280480
	I1103 20:49:30.765687  111312 host.go:66] Checking if "multinode-280480" exists ...
	I1103 20:49:30.765929  111312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1103 20:49:30.765984  111312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-280480
	I1103 20:49:30.780755  111312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/multinode-280480/id_rsa Username:docker}
	I1103 20:49:30.864930  111312 ssh_runner.go:195] Run: systemctl --version
	I1103 20:49:30.868453  111312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1103 20:49:30.877951  111312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1103 20:49:30.930895  111312 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:56 SystemTime:2023-11-03 20:49:30.921936431 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1103 20:49:30.931357  111312 kubeconfig.go:92] found "multinode-280480" server: "https://192.168.58.2:8443"
	I1103 20:49:30.931376  111312 api_server.go:166] Checking apiserver status ...
	I1103 20:49:30.931413  111312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1103 20:49:30.941209  111312 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1428/cgroup
	I1103 20:49:30.949007  111312 api_server.go:182] apiserver freezer: "13:freezer:/docker/6561f5214f3b17505a43ff57c40d46ca1f1dcdf0e2d6bd8538c6a73879314ab8/crio/crio-f3e43b89225befed808ca71f907255c3d98b0b5912c83e9bd090ca94324be5eb"
	I1103 20:49:30.949048  111312 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6561f5214f3b17505a43ff57c40d46ca1f1dcdf0e2d6bd8538c6a73879314ab8/crio/crio-f3e43b89225befed808ca71f907255c3d98b0b5912c83e9bd090ca94324be5eb/freezer.state
	I1103 20:49:30.956198  111312 api_server.go:204] freezer state: "THAWED"
	I1103 20:49:30.956214  111312 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1103 20:49:30.960299  111312 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1103 20:49:30.960319  111312 status.go:421] multinode-280480 apiserver status = Running (err=<nil>)
	I1103 20:49:30.960327  111312 status.go:257] multinode-280480 status: &{Name:multinode-280480 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1103 20:49:30.960340  111312 status.go:255] checking status of multinode-280480-m02 ...
	I1103 20:49:30.960609  111312 cli_runner.go:164] Run: docker container inspect multinode-280480-m02 --format={{.State.Status}}
	I1103 20:49:30.976475  111312 status.go:330] multinode-280480-m02 host status = "Running" (err=<nil>)
	I1103 20:49:30.976522  111312 host.go:66] Checking if "multinode-280480-m02" exists ...
	I1103 20:49:30.976794  111312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-280480-m02
	I1103 20:49:30.992125  111312 host.go:66] Checking if "multinode-280480-m02" exists ...
	I1103 20:49:30.992399  111312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1103 20:49:30.992467  111312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-280480-m02
	I1103 20:49:31.007367  111312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32854 SSHKeyPath:/home/jenkins/minikube-integration/17545-5130/.minikube/machines/multinode-280480-m02/id_rsa Username:docker}
	I1103 20:49:31.092717  111312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1103 20:49:31.102816  111312 status.go:257] multinode-280480-m02 status: &{Name:multinode-280480-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1103 20:49:31.102850  111312 status.go:255] checking status of multinode-280480-m03 ...
	I1103 20:49:31.103118  111312 cli_runner.go:164] Run: docker container inspect multinode-280480-m03 --format={{.State.Status}}
	I1103 20:49:31.119011  111312 status.go:330] multinode-280480-m03 host status = "Stopped" (err=<nil>)
	I1103 20:49:31.119039  111312 status.go:343] host is not running, skipping remaining checks
	I1103 20:49:31.119048  111312 status.go:257] multinode-280480-m03 status: &{Name:multinode-280480-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
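The non-zero exits above are the pass condition: with m03 stopped, minikube status intentionally returns exit status 7 instead of 0, and the stderr trace shows it skipping the kubelet/apiserver probes once docker reports the host as stopped. A minimal sketch of the same check, assuming the running profile:

    out/minikube-linux-amd64 -p multinode-280480 node stop m03
    out/minikube-linux-amd64 -p multinode-280480 status
    echo "status exit: $?"   # expected: 7 while any node is stopped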
--- PASS: TestMultiNode/serial/StopNode (2.08s)

TestMultiNode/serial/StartAfterStop (10.48s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-280480 node start m03 --alsologtostderr: (9.834759875s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
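A minimal sketch of the stop/start-node cycle this subtest exercises, assuming the cluster left over from StopNode:

    out/minikube-linux-amd64 -p multinode-280480 node start m03 --alsologtostderr
    out/minikube-linux-amd64 -p multinode-280480 status   # exits 0 again once all nodes are up
    kubectl get nodes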
--- PASS: TestMultiNode/serial/StartAfterStop (10.48s)

TestMultiNode/serial/RestartKeepsNodes (117.33s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-280480
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-280480
E1103 20:49:41.771095   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-280480: (24.763208751s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-280480 --wait=true -v=8 --alsologtostderr
E1103 20:50:09.457242   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt: no such file or directory
E1103 20:51:33.902824   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-280480 --wait=true -v=8 --alsologtostderr: (1m32.445858401s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-280480
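The property under test is that a full stop/start cycle preserves the node list. A minimal sketch, assuming the same profile (before.txt and after.txt are just scratch files for the comparison):

    out/minikube-linux-amd64 node list -p multinode-280480 > before.txt
    out/minikube-linux-amd64 stop -p multinode-280480
    out/minikube-linux-amd64 start -p multinode-280480 --wait=true
    out/minikube-linux-amd64 node list -p multinode-280480 > after.txt
    diff before.txt after.txt   # no output expected: the node list is unchanged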
--- PASS: TestMultiNode/serial/RestartKeepsNodes (117.33s)

TestMultiNode/serial/DeleteNode (4.61s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-280480 node delete m03: (4.050752823s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
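The deletion is verified three ways above: cluster status, docker volume ls (presumably confirming the deleted machine's volume is gone), and the go-template that prints each node's Ready condition. A minimal sketch of the same post-delete checks:

    out/minikube-linux-amd64 -p multinode-280480 node delete m03
    docker volume ls    # the multinode-280480-m03 volume should no longer appear
    kubectl get nodes   # only multinode-280480 and multinode-280480-m02 remain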
--- PASS: TestMultiNode/serial/DeleteNode (4.61s)

TestMultiNode/serial/StopMultiNode (23.82s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-280480 stop: (23.638549818s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-280480 status: exit status 7 (91.920512ms)
-- stdout --
	multinode-280480
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-280480-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-280480 status --alsologtostderr: exit status 7 (89.329312ms)
-- stdout --
	multinode-280480
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-280480-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1103 20:52:07.328489  121462 out.go:296] Setting OutFile to fd 1 ...
	I1103 20:52:07.328640  121462 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1103 20:52:07.328650  121462 out.go:309] Setting ErrFile to fd 2...
	I1103 20:52:07.328657  121462 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1103 20:52:07.328871  121462 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17545-5130/.minikube/bin
	I1103 20:52:07.329047  121462 out.go:303] Setting JSON to false
	I1103 20:52:07.329081  121462 mustload.go:65] Loading cluster: multinode-280480
	I1103 20:52:07.329185  121462 notify.go:220] Checking for updates...
	I1103 20:52:07.329582  121462 config.go:182] Loaded profile config "multinode-280480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1103 20:52:07.329603  121462 status.go:255] checking status of multinode-280480 ...
	I1103 20:52:07.330053  121462 cli_runner.go:164] Run: docker container inspect multinode-280480 --format={{.State.Status}}
	I1103 20:52:07.346047  121462 status.go:330] multinode-280480 host status = "Stopped" (err=<nil>)
	I1103 20:52:07.346077  121462 status.go:343] host is not running, skipping remaining checks
	I1103 20:52:07.346087  121462 status.go:257] multinode-280480 status: &{Name:multinode-280480 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1103 20:52:07.346132  121462 status.go:255] checking status of multinode-280480-m02 ...
	I1103 20:52:07.346352  121462 cli_runner.go:164] Run: docker container inspect multinode-280480-m02 --format={{.State.Status}}
	I1103 20:52:07.361682  121462 status.go:330] multinode-280480-m02 host status = "Stopped" (err=<nil>)
	I1103 20:52:07.361702  121462 status.go:343] host is not running, skipping remaining checks
	I1103 20:52:07.361708  121462 status.go:257] multinode-280480-m02 status: &{Name:multinode-280480-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
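As in StopNode, the stderr trace shows the cheap path: once docker container inspect reports a stopped state, the ssh-based kubelet and apiserver probes are skipped. A sketch of inspecting the raw docker-side state minikube reads, assuming the same containers (docker itself reports e.g. "exited" for a stopped container, which minikube maps to Stopped):

    docker container inspect multinode-280480 --format={{.State.Status}}
    docker container inspect multinode-280480-m02 --format={{.State.Status}}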
--- PASS: TestMultiNode/serial/StopMultiNode (23.82s)

TestMultiNode/serial/RestartMultiNode (72.75s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-280480 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1103 20:52:34.414158   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/functional-573959/client.crt: no such file or directory
E1103 20:52:56.948512   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-280480 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m12.188852703s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-280480 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (72.75s)

TestMultiNode/serial/ValidateNameConflict (25.8s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-280480
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-280480-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-280480-m02 --driver=docker  --container-runtime=crio: exit status 14 (79.690381ms)
-- stdout --
	* [multinode-280480-m02] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17545
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17545-5130/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17545-5130/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-280480-m02' is duplicated with machine name 'multinode-280480-m02' in profile 'multinode-280480'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-280480-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-280480-m03 --driver=docker  --container-runtime=crio: (23.554537779s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-280480
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-280480: exit status 80 (262.780963ms)
-- stdout --
	* Adding node m03 to cluster multinode-280480
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-280480-m03 already exists in multinode-280480-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-280480-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-280480-m03: (1.84074667s)
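Both failures above are deliberate: minikube refuses profile names that collide with machine names in an existing profile. A minimal sketch of the first collision, assuming multinode-280480 still owns a machine named multinode-280480-m02:

    out/minikube-linux-amd64 start -p multinode-280480-m02 --driver=docker --container-runtime=crio
    echo $?   # 14 (MK_USAGE: profile name should be unique)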
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.80s)

TestPreload (144.06s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-978286 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1103 20:54:41.770649   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-978286 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m11.472522449s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-978286 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-978286
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-978286: (5.708527886s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-978286 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-978286 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m3.661625165s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-978286 image list
helpers_test.go:175: Cleaning up "test-preload-978286" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-978286
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-978286: (2.246373735s)
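The flow above checks that an image pulled before a preload-enabled restart is still present afterwards. A condensed sketch, assuming the test-preload-978286 profile from the first start:

    out/minikube-linux-amd64 -p test-preload-978286 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 stop -p test-preload-978286
    out/minikube-linux-amd64 start -p test-preload-978286 --wait=true
    out/minikube-linux-amd64 -p test-preload-978286 image list   # busybox should still be listed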
--- PASS: TestPreload (144.06s)

TestScheduledStopUnix (99.42s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-499034 --memory=2048 --driver=docker  --container-runtime=crio
E1103 20:56:33.902859   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-499034 --memory=2048 --driver=docker  --container-runtime=crio: (23.996487367s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-499034 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-499034 -n scheduled-stop-499034
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-499034 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-499034 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-499034 -n scheduled-stop-499034
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-499034
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-499034 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1103 20:57:34.414512   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/functional-573959/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-499034
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-499034: exit status 7 (75.603254ms)
-- stdout --
	scheduled-stop-499034
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-499034 -n scheduled-stop-499034
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-499034 -n scheduled-stop-499034: exit status 7 (74.489727ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-499034" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-499034
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-499034: (4.054354376s)
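A condensed sketch of the schedule/cancel/reschedule sequence exercised above, assuming a running scheduled-stop-499034 profile:

    out/minikube-linux-amd64 stop -p scheduled-stop-499034 --schedule 5m
    out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-499034
    out/minikube-linux-amd64 stop -p scheduled-stop-499034 --cancel-scheduled
    out/minikube-linux-amd64 stop -p scheduled-stop-499034 --schedule 15s
    sleep 20; out/minikube-linux-amd64 status -p scheduled-stop-499034   # exit 7: host stopped on schedule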
--- PASS: TestScheduledStopUnix (99.42s)

TestInsufficientStorage (12.91s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-903003 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-903003 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.54410707s)
-- stdout --
	{"specversion":"1.0","id":"2474c9f0-3bc5-4450-bbfb-cf8256152285","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-903003] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"56498197-5c1c-42f0-ad0f-e0d9f0590214","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17545"}}
	{"specversion":"1.0","id":"4de33d8d-da93-44a3-b17c-659d2264abda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d2823909-1906-4fbe-b51f-6b7cc507a35d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17545-5130/kubeconfig"}}
	{"specversion":"1.0","id":"b35241b7-41da-49ee-a0c2-416099138d92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17545-5130/.minikube"}}
	{"specversion":"1.0","id":"66f74fdb-c524-45dc-8dcb-7605b1989744","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f5dd4cd7-5f49-4bb7-97cc-c11de5a62ca1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0966e5cb-ed15-4cb2-9a15-87107c19fba1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3c0315f6-b37a-49c3-9210-22efb58a8d89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"5c86fdfc-2721-439a-817a-59436691a333","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1ceb6d1a-bc44-404d-8ac3-276800de4ae6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"0fc5ece0-3e79-4361-acde-5f384dae2076","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-903003 in cluster insufficient-storage-903003","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4b186042-a162-4d5d-b4a0-dea7c832d328","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"674ab1d2-04ce-463f-81ce-b7f481078b4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4bd4e5da-111c-45d2-be66-67df9473d5d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-903003 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-903003 --output=json --layout=cluster: exit status 7 (261.001625ms)
-- stdout --
	{"Name":"insufficient-storage-903003","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-903003","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1103 20:58:05.695612  142935 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-903003" does not appear in /home/jenkins/minikube-integration/17545-5130/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-903003 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-903003 --output=json --layout=cluster: exit status 7 (260.314767ms)
-- stdout --
	{"Name":"insufficient-storage-903003","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-903003","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1103 20:58:05.956330  143021 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-903003" does not appear in /home/jenkins/minikube-integration/17545-5130/kubeconfig
	E1103 20:58:05.965403  143021 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/insufficient-storage-903003/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-903003" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-903003
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-903003: (1.841154823s)
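The MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 entries in the JSON event stream suggest the harness fakes a nearly full /var rather than filling a real disk. A repro sketch under that assumption:

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      out/minikube-linux-amd64 start -p insufficient-storage-903003 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=crio
    echo $?   # 26 (RSRC_DOCKER_STORAGE); the error text notes --force skips the check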
--- PASS: TestInsufficientStorage (12.91s)

TestKubernetesUpgrade (347.4s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-598527 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-598527 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (52.255072806s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-598527
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-598527: (1.243207571s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-598527 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-598527 status --format={{.Host}}: exit status 7 (88.835551ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-598527 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-598527 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m28.973834783s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-598527 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-598527 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-598527 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (81.823729ms)
-- stdout --
	* [kubernetes-upgrade-598527] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17545
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17545-5130/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17545-5130/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-598527
	    minikube start -p kubernetes-upgrade-598527 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5985272 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-598527 --kubernetes-version=v1.28.3
	    
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-598527 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1103 21:04:41.770728   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-598527 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.538453375s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-598527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-598527
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-598527: (2.162819913s)
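A condensed sketch of the upgrade path exercised above: install an old Kubernetes, stop, upgrade the same profile in place, then confirm a downgrade is refused:

    out/minikube-linux-amd64 start -p kubernetes-upgrade-598527 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 stop -p kubernetes-upgrade-598527
    out/minikube-linux-amd64 start -p kubernetes-upgrade-598527 --memory=2200 --kubernetes-version=v1.28.3 --driver=docker --container-runtime=crio
    # attempting --kubernetes-version=v1.16.0 on the upgraded profile exits 106 (K8S_DOWNGRADE_UNSUPPORTED)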
--- PASS: TestKubernetesUpgrade (347.40s)

TestMissingContainerUpgrade (142.26s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.9.0.2312109089.exe start -p missing-upgrade-508611 --memory=2200 --driver=docker  --container-runtime=crio
E1103 20:58:57.459324   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/functional-573959/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.9.0.2312109089.exe start -p missing-upgrade-508611 --memory=2200 --driver=docker  --container-runtime=crio: (1m4.825171574s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-508611
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-508611: (11.09984296s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-508611
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-508611 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-508611 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m3.809494727s)
helpers_test.go:175: Cleaning up "missing-upgrade-508611" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-508611
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-508611: (2.069763473s)
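The scenario simulated here is a machine container that vanished between minikube versions. A minimal sketch, assuming an old release binary at the /tmp path used above:

    /tmp/minikube-v1.9.0.2312109089.exe start -p missing-upgrade-508611 --memory=2200 --driver=docker --container-runtime=crio
    docker stop missing-upgrade-508611 && docker rm missing-upgrade-508611
    out/minikube-linux-amd64 start -p missing-upgrade-508611 --memory=2200 --driver=docker --container-runtime=crio   # must recreate the deleted container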
--- PASS: TestMissingContainerUpgrade (142.26s)

TestStoppedBinaryUpgrade/Setup (0.45s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.45s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-510960 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-510960 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (100.735194ms)
-- stdout --
	* [NoKubernetes-510960] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17545
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17545-5130/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17545-5130/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
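This MK_USAGE failure is the expected outcome: --no-kubernetes and --kubernetes-version are mutually exclusive. Following the hint in the error text, a sketch of clearing a globally configured version before a no-kubernetes start:

    out/minikube-linux-amd64 config unset kubernetes-version
    out/minikube-linux-amd64 start -p NoKubernetes-510960 --no-kubernetes --driver=docker --container-runtime=crio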
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (34.68s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-510960 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-510960 --driver=docker  --container-runtime=crio: (34.329002013s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-510960 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (34.68s)

TestNoKubernetes/serial/StartWithStopK8s (7.8s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-510960 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-510960 --no-kubernetes --driver=docker  --container-runtime=crio: (5.257495036s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-510960 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-510960 status -o json: exit status 2 (358.343753ms)
-- stdout --
	{"Name":"NoKubernetes-510960","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-510960
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-510960: (2.186281023s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.80s)

TestNoKubernetes/serial/Start (10.08s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-510960 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-510960 --no-kubernetes --driver=docker  --container-runtime=crio: (10.079787142s)
--- PASS: TestNoKubernetes/serial/Start (10.08s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-510960 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-510960 "sudo systemctl is-active --quiet service kubelet": exit status 1 (294.339565ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
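Here too the non-zero exit is the pass condition: inside the node, systemctl is-active exits non-zero for an inactive unit (status 3 in the stderr above), and minikube ssh propagates that as its own exit code. A minimal sketch of the same probe:

    out/minikube-linux-amd64 ssh -p NoKubernetes-510960 "sudo systemctl is-active --quiet service kubelet" \
      || echo "kubelet not active (ssh exit $?)"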
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

TestNoKubernetes/serial/ProfileList (1.4s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.40s)

TestNoKubernetes/serial/Stop (1.22s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-510960
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-510960: (1.218422605s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

TestNoKubernetes/serial/StartNoArgs (9.3s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-510960 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-510960 --driver=docker  --container-runtime=crio: (9.299631505s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.30s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-510960 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-510960 "sudo systemctl is-active --quiet service kubelet": exit status 1 (290.940813ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestPause/serial/Start (71.08s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-552269 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-552269 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m11.077745152s)
--- PASS: TestPause/serial/Start (71.08s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.9s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-519866
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.90s)

TestPause/serial/SecondStartNoReconfiguration (42.11s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-552269 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-552269 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.08344394s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (42.11s)

TestNetworkPlugins/group/false (3.74s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-768120 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-768120 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (170.505596ms)
-- stdout --
	* [false-768120] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17545
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17545-5130/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17545-5130/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1103 21:00:53.777762  187130 out.go:296] Setting OutFile to fd 1 ...
	I1103 21:00:53.777938  187130 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1103 21:00:53.777951  187130 out.go:309] Setting ErrFile to fd 2...
	I1103 21:00:53.777959  187130 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1103 21:00:53.778219  187130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17545-5130/.minikube/bin
	I1103 21:00:53.778885  187130 out.go:303] Setting JSON to false
	I1103 21:00:53.780103  187130 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2604,"bootTime":1699042650,"procs":437,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1103 21:00:53.780160  187130 start.go:138] virtualization: kvm guest
	I1103 21:00:53.782416  187130 out.go:177] * [false-768120] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1103 21:00:53.783841  187130 out.go:177]   - MINIKUBE_LOCATION=17545
	I1103 21:00:53.783882  187130 notify.go:220] Checking for updates...
	I1103 21:00:53.785334  187130 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1103 21:00:53.786729  187130 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17545-5130/kubeconfig
	I1103 21:00:53.788053  187130 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17545-5130/.minikube
	I1103 21:00:53.789546  187130 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1103 21:00:53.791012  187130 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1103 21:00:53.792831  187130 config.go:182] Loaded profile config "kubernetes-upgrade-598527": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1103 21:00:53.792943  187130 config.go:182] Loaded profile config "missing-upgrade-508611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1103 21:00:53.793073  187130 config.go:182] Loaded profile config "pause-552269": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1103 21:00:53.793151  187130 driver.go:378] Setting default libvirt URI to qemu:///system
	I1103 21:00:53.817541  187130 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1103 21:00:53.817650  187130 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1103 21:00:53.877887  187130 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:74 SystemTime:2023-11-03 21:00:53.868997872 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1103 21:00:53.877991  187130 docker.go:295] overlay module found
	I1103 21:00:53.880826  187130 out.go:177] * Using the docker driver based on user configuration
	I1103 21:00:53.882086  187130 start.go:298] selected driver: docker
	I1103 21:00:53.882095  187130 start.go:902] validating driver "docker" against <nil>
	I1103 21:00:53.882106  187130 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1103 21:00:53.884152  187130 out.go:177] 
	W1103 21:00:53.885425  187130 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1103 21:00:53.886721  187130 out.go:177] 

** /stderr **
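
The MK_USAGE failure above is this test's expected outcome: CRI-O ships no built-in pod networking, so minikube rejects --cni=false up front, before any profile is created, and exits with its usage-error code (the exit status 14 seen here). For contrast, a start invocation that clears this validation names a concrete CNI; this is only a sketch, with bridge chosen arbitrarily:

	# sketch: any real CNI value (bridge, kindnet, calico, ...) satisfies the crio check
	out/minikube-linux-amd64 start -p false-768120 --memory=2048 --cni=bridge --driver=docker --container-runtime=crio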
net_test.go:88: 
----------------------- debugLogs start: false-768120 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-768120

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-768120

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-768120

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-768120

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-768120

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-768120

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-768120

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-768120

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-768120

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-768120

>>> host: /etc/nsswitch.conf:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: /etc/hosts:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: /etc/resolv.conf:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-768120

>>> host: crictl pods:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: crictl containers:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> k8s: describe netcat deployment:
error: context "false-768120" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-768120" does not exist

>>> k8s: netcat logs:
error: context "false-768120" does not exist

>>> k8s: describe coredns deployment:
error: context "false-768120" does not exist

>>> k8s: describe coredns pods:
error: context "false-768120" does not exist

>>> k8s: coredns logs:
error: context "false-768120" does not exist

>>> k8s: describe api server pod(s):
error: context "false-768120" does not exist

>>> k8s: api server logs:
error: context "false-768120" does not exist

>>> host: /etc/cni:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: ip a s:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: ip r s:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: iptables-save:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: iptables table nat:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> k8s: describe kube-proxy daemon set:
error: context "false-768120" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-768120" does not exist

>>> k8s: kube-proxy logs:
error: context "false-768120" does not exist

>>> host: kubelet daemon status:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: kubelet daemon config:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> k8s: kubelet logs:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 03 Nov 2023 21:00:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0-beta.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-598527
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt
    server: https://192.168.85.2:8443
  name: missing-upgrade-508611
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 03 Nov 2023 21:00:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0-beta.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-552269
contexts:
- context:
    cluster: kubernetes-upgrade-598527
    user: kubernetes-upgrade-598527
  name: kubernetes-upgrade-598527
- context:
    cluster: missing-upgrade-508611
    user: missing-upgrade-508611
  name: missing-upgrade-508611
- context:
    cluster: pause-552269
    extensions:
    - extension:
        last-update: Fri, 03 Nov 2023 21:00:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0-beta.0
      name: context_info
    namespace: default
    user: pause-552269
  name: pause-552269
current-context: kubernetes-upgrade-598527
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-598527
  user:
    client-certificate: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/kubernetes-upgrade-598527/client.crt
    client-key: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/kubernetes-upgrade-598527/client.key
- name: missing-upgrade-508611
  user:
    client-certificate: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/missing-upgrade-508611/client.crt
    client-key: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/missing-upgrade-508611/client.key
- name: pause-552269
  user:
    client-certificate: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/pause-552269/client.crt
    client-key: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/pause-552269/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-768120

>>> host: docker daemon status:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: docker daemon config:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: /etc/docker/daemon.json:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: docker system info:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: cri-docker daemon status:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: cri-docker daemon config:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: cri-dockerd version:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: containerd daemon status:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: containerd daemon config:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: /etc/containerd/config.toml:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: containerd config dump:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: crio daemon status:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: crio daemon config:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: /etc/crio:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

>>> host: crio config:
* Profile "false-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-768120"

----------------------- debugLogs end: false-768120 [took: 3.381418443s] --------------------------------
helpers_test.go:175: Cleaning up "false-768120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-768120
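
A note on the kubectl config dump in the debug logs above: it lists only the three profiles still alive on this agent (kubernetes-upgrade-598527, missing-upgrade-508611, pause-552269), with current-context pinned to kubernetes-upgrade-598527. Every "context was not found" line is therefore expected, since the false-768120 profile was rejected before a context could be written. To query one of the live clusters instead, something like the following would work (shown purely as an illustration):

	KUBECONFIG=/home/jenkins/minikube-integration/17545-5130/kubeconfig kubectl --context pause-552269 get pods -A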
--- PASS: TestNetworkPlugins/group/false (3.74s)

TestPause/serial/Pause (0.9s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-552269 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.90s)

TestPause/serial/VerifyStatus (0.41s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-552269 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-552269 --output=json --layout=cluster: exit status 2 (408.809711ms)

-- stdout --
	{"Name":"pause-552269","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-552269","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
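
The cluster-layout JSON above encodes the paused state per component: the node and kubeconfig report 200/OK, the apiserver 418/Paused, and the kubelet 405/Stopped, which is why the status command itself exits 2. A quick way to pull just the component states out of that payload (a sketch that assumes jq is available on the host):

	out/minikube-linux-amd64 status -p pause-552269 --output=json --layout=cluster | jq '.Nodes[].Components | map_values(.StatusName)'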
--- PASS: TestPause/serial/VerifyStatus (0.41s)

TestPause/serial/Unpause (1.08s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-552269 --alsologtostderr -v=5
E1103 21:01:33.902968   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: no such file or directory
pause_test.go:121: (dbg) Done: out/minikube-linux-amd64 unpause -p pause-552269 --alsologtostderr -v=5: (1.080812987s)
--- PASS: TestPause/serial/Unpause (1.08s)

TestPause/serial/PauseAgain (0.84s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-552269 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

TestPause/serial/DeletePaused (2.78s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-552269 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-552269 --alsologtostderr -v=5: (2.778610177s)
--- PASS: TestPause/serial/DeletePaused (2.78s)

TestPause/serial/VerifyDeletedResources (13.25s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (13.200566351s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-552269
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-552269: exit status 1 (14.981601ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-552269: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
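
VerifyDeletedResources leans on the Docker CLI's error behaviour: after delete, "docker volume inspect pause-552269" must fail with "no such volume", and the container and network listings must no longer mention the profile. A condensed sketch of the same assertions:

	docker volume inspect pause-552269 || echo "volume gone (inspect exited $?)"
	docker ps -a --format '{{.Names}}' | grep pause-552269 || echo "no containers left"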
--- PASS: TestPause/serial/VerifyDeletedResources (13.25s)

TestStartStop/group/old-k8s-version/serial/FirstStart (114.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-351184 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-351184 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (1m54.54810241s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (114.55s)

TestStartStop/group/no-preload/serial/FirstStart (61.88s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-716262 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-716262 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (1m1.883500209s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (61.88s)

TestStartStop/group/no-preload/serial/DeployApp (8.34s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-716262 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5c75a296-6a59-43a5-a29d-bee74e191127] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5c75a296-6a59-43a5-a29d-bee74e191127] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.015119779s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-716262 exec busybox -- /bin/sh -c "ulimit -n"
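
The DeployApp step follows the same shape in every group: create the test's busybox manifest, wait for the pod to report Running, then prove exec works by reading a trivial value from inside the container. By hand, the equivalent checks would look roughly like this (pod name and context taken from the log above):

	kubectl --context no-preload-716262 wait --for=condition=ready pod/busybox --timeout=8m
	kubectl --context no-preload-716262 exec busybox -- /bin/sh -c "ulimit -n"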
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.34s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-716262 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-716262 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/no-preload/serial/Stop (11.86s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-716262 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-716262 --alsologtostderr -v=3: (11.862808621s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.86s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-716262 -n no-preload-716262
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-716262 -n no-preload-716262: exit status 7 (74.498109ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-716262 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
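
The "may be ok" note reflects how minikube status reports state: a stopped host is signalled through the exit code (7 here) rather than through stderr, so the test accepts the non-zero exit and only then enables the dashboard addon against the stopped profile. A sketch of reading that signal directly:

	out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-716262 -n no-preload-716262 || echo "status exit code: $?"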
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (336.98s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-716262 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-716262 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (5m36.62545953s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-716262 -n no-preload-716262
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (336.98s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-351184 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1bb27156-32de-4c1d-b111-2e42c27af234] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1bb27156-32de-4c1d-b111-2e42c27af234] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.020036281s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-351184 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.49s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-351184 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-351184 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.74s)

TestStartStop/group/old-k8s-version/serial/Stop (11.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-351184 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-351184 --alsologtostderr -v=3: (11.865844124s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.87s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-351184 -n old-k8s-version-351184
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-351184 -n old-k8s-version-351184: exit status 7 (91.045344ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-351184 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (419.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-351184 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-351184 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (6m59.214641927s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-351184 -n old-k8s-version-351184
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (419.55s)

TestStartStop/group/embed-certs/serial/FirstStart (38.08s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-952287 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-952287 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (38.074926334s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (38.08s)

TestStartStop/group/newest-cni/serial/FirstStart (35.64s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-345485 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-345485 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (35.638170398s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.64s)

TestStartStop/group/embed-certs/serial/DeployApp (7.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-952287 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b600ffe7-cf0f-492e-b3aa-2cf96f8958af] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b600ffe7-cf0f-492e-b3aa-2cf96f8958af] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.014773619s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-952287 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.35s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.84s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-952287 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-952287 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.84s)

TestStartStop/group/embed-certs/serial/Stop (11.9s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-952287 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-952287 --alsologtostderr -v=3: (11.903394256s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.90s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.77s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-345485 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.77s)

TestStartStop/group/newest-cni/serial/Stop (1.22s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-345485 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-345485 --alsologtostderr -v=3: (1.218340268s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.22s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-345485 -n newest-cni-345485
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-345485 -n newest-cni-345485: exit status 7 (74.87832ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-345485 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (26.42s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-345485 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-345485 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (26.110453683s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-345485 -n newest-cni-345485
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (26.42s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-952287 -n embed-certs-952287
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-952287 -n embed-certs-952287: exit status 7 (79.290013ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-952287 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (332.11s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-952287 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-952287 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (5m31.742247279s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-952287 -n embed-certs-952287
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (332.11s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-345485 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
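
VerifyKubernetesImages enumerates what CRI-O has pulled and flags anything outside the expected Kubernetes image set; kindest/kindnetd shows up here because this profile was started with --network-plugin=cni and the kindnet CNI was deployed into it. The same enumeration by hand (a sketch assuming jq on the host):

	out/minikube-linux-amd64 ssh -p newest-cni-345485 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'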
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/newest-cni/serial/Pause (2.48s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-345485 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-345485 -n newest-cni-345485
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-345485 -n newest-cni-345485: exit status 2 (286.266297ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-345485 -n newest-cni-345485
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-345485 -n newest-cni-345485: exit status 2 (296.918267ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-345485 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-345485 -n newest-cni-345485
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-345485 -n newest-cni-345485
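
This Pause subtest drives the whole round trip through status templates: after pause, {{.APIServer}} reports Paused while {{.Kubelet}} reports Stopped (each via the exit-code-2 "may be ok" path), then both templates are polled again after unpause to confirm the cluster came back. Condensed, the manual equivalent is roughly:

	out/minikube-linux-amd64 pause -p newest-cni-345485
	out/minikube-linux-amd64 status -p newest-cni-345485 --format='{{.APIServer}}/{{.Kubelet}}'
	out/minikube-linux-amd64 unpause -p newest-cni-345485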
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.48s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (37.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-051380 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1103 21:06:33.902438   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-051380 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (37.276703948s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (37.28s)
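
The only difference from a stock profile here is --apiserver-port=8444 (the default is 8443). One way to confirm where the generated kubeconfig points (a sketch, using standard kubectl jsonpath filtering):

	kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-051380")].cluster.server}'
	# expected to end in :8444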

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-051380 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [00856425-03f9-499d-b4bf-76cb0b382f75] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [00856425-03f9-499d-b4bf-76cb0b382f75] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.014212301s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-051380 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.33s)
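
testdata/busybox.yaml ships with the minikube test suite; the following is not that file, just an imperative stand-in matching the label the wait above selects on (image tag borrowed from the VerifyKubernetesImages output later in this run):

	kubectl --context default-k8s-diff-port-051380 run busybox \
	  --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc \
	  --labels=integration-test=busybox -- sleep 3600
	kubectl --context default-k8s-diff-port-051380 exec busybox -- /bin/sh -c "ulimit -n"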

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-051380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-051380 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)
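
--images and --registries rewrite what an addon deploys; whether the override took can be read back off the Deployment (a sketch; the exact rendered image string depends on how minikube joins the registry and image parts, so treat the expected value as approximate):

	kubectl --context default-k8s-diff-port-051380 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# should reference fake.domain and echoserver:1.4 per the flags above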

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-051380 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-051380 --alsologtostderr -v=3: (11.9325234s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.93s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-051380 -n default-k8s-diff-port-051380
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-051380 -n default-k8s-diff-port-051380: exit status 7 (78.305243ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-051380 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)
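
Addons can be toggled while the host is down: status exits 7 with "Stopped" for a stopped host (asserted above as "may be ok"), and the dashboard enabled here only materializes after the next start, which the later AddonExistsAfterStop step verifies. By hand (sketch, commands as in the log):

	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-051380 || true   # "Stopped", exit 7
	out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-051380 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4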

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (341.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-051380 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1103 21:07:34.414815   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/functional-573959/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-051380 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (5m41.313014199s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-051380 -n default-k8s-diff-port-051380
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (341.90s)
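
A second start against an existing profile is an in-place restart, so state created before the stop should survive it; quick spot checks along the lines of the later steps (sketch):

	kubectl --context default-k8s-diff-port-051380 get pod busybox                     # from DeployApp
	kubectl --context default-k8s-diff-port-051380 -n kubernetes-dashboard get pods \
	  -l k8s-app=kubernetes-dashboard                                                  # from EnableAddonAfterStop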

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nmvtb" [4be99116-2db9-4e3c-a93d-4f04525a8ca1] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nmvtb" [4be99116-2db9-4e3c-a93d-4f04525a8ca1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.017147838s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nmvtb" [4be99116-2db9-4e3c-a93d-4f04525a8ca1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008498286s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-716262 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-716262 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.59s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-716262 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-716262 -n no-preload-716262
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-716262 -n no-preload-716262: exit status 2 (298.401653ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-716262 -n no-preload-716262
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-716262 -n no-preload-716262: exit status 2 (289.667835ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-716262 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-716262 -n no-preload-716262
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-716262 -n no-preload-716262
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.59s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (68.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-768120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1103 21:09:36.949157   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: no such file or directory
E1103 21:09:41.770852   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/ingress-addon-legacy-656945/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-768120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m8.71152454s)
--- PASS: TestNetworkPlugins/group/auto/Start (68.71s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-768120 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-768120 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-swph7" [3a828f7d-de16-44de-93ad-3b03186a4e71] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-swph7" [3a828f7d-de16-44de-93ad-3b03186a4e71] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.008107928s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-768120 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-768120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-768120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
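
DNS, Localhost and HairPin are three probes against the same netcat deployment: resolution through cluster DNS, a loopback connection to the pod's own port 8080, and a hairpin connection back to the pod through its own Service. Run by hand (commands exactly as in the log):

	kubectl --context auto-768120 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-768120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-768120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"   # netcat = the Service name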

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (71.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-768120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-768120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m11.824900409s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.82s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-j4dfq" [eed51fea-7736-438d-90b5-fcc7d4a5edbe] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015627508s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-j4dfq" [eed51fea-7736-438d-90b5-fcc7d4a5edbe] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009668187s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-351184 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-351184 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.36s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-351184 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-351184 -n old-k8s-version-351184
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-351184 -n old-k8s-version-351184: exit status 2 (321.525966ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-351184 -n old-k8s-version-351184
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-351184 -n old-k8s-version-351184: exit status 2 (340.163798ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-351184 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-351184 -n old-k8s-version-351184
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-351184 -n old-k8s-version-351184
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.98s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (61.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-768120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-768120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m1.902392142s)
--- PASS: TestNetworkPlugins/group/calico/Start (61.90s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2xhk2" [19255371-9ffd-4ec8-b2a0-b7b88987da41] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1103 21:11:33.903265   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/addons-643880/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2xhk2" [19255371-9ffd-4ec8-b2a0-b7b88987da41] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.015634626s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2xhk2" [19255371-9ffd-4ec8-b2a0-b7b88987da41] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00882392s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-952287 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-952287 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.82s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-952287 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-952287 -n embed-certs-952287
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-952287 -n embed-certs-952287: exit status 2 (305.375467ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-952287 -n embed-certs-952287
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-952287 -n embed-certs-952287: exit status 2 (323.807081ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-952287 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-952287 -n embed-certs-952287
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-952287 -n embed-certs-952287
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.82s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (58.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-768120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-768120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (58.620803273s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.62s)
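
Note the --cni flag here takes a path to a CNI manifest, where the other Start variants in this run pass a built-in name (kindnet, calico, flannel, bridge) or use --enable-default-cni. Side by side (sketch; <profile> is a placeholder):

	out/minikube-linux-amd64 start -p <profile> --cni=flannel --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 start -p <profile> --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio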

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-vjttm" [290e1fbd-0b35-4f19-8932-d8ff08fbdeb4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.144981376s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.20s)
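
The ControllerPod wait amounts to a readiness wait on the plugin's label selector; in plain kubectl terms (a sketch, timeout chosen to mirror the test's 10m window):

	kubectl --context kindnet-768120 -n kube-system wait pod \
	  -l app=kindnet --for=condition=ready --timeout=600s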

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-768120 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-768120 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6xw4c" [2d01b90f-f4cc-4f73-a287-1f2b013259ce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6xw4c" [2d01b90f-f4cc-4f73-a287-1f2b013259ce] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.008989747s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-qbmq2" [5a60138f-c4ad-44e2-8f25-e899e7acba4e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.018242582s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-768120 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-768120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-768120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-768120 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-768120 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qstf6" [4b6abb3d-2fec-4170-a60a-92959a9666d7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qstf6" [4b6abb3d-2fec-4170-a60a-92959a9666d7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.010415583s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.34s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-768120 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-768120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-768120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (42.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-768120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-768120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (42.634991824s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (42.64s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-768120 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-768120 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rmsb2" [7ed10e3b-43ab-4b8a-8ac3-8f51dd6fb7a8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1103 21:12:53.733346   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/no-preload-716262/client.crt: no such file or directory
E1103 21:12:53.739300   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/no-preload-716262/client.crt: no such file or directory
E1103 21:12:53.749930   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/no-preload-716262/client.crt: no such file or directory
E1103 21:12:53.770834   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/no-preload-716262/client.crt: no such file or directory
E1103 21:12:53.811846   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/no-preload-716262/client.crt: no such file or directory
E1103 21:12:53.892027   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/no-preload-716262/client.crt: no such file or directory
E1103 21:12:54.052602   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/no-preload-716262/client.crt: no such file or directory
E1103 21:12:54.373166   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/no-preload-716262/client.crt: no such file or directory
E1103 21:12:55.014068   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/no-preload-716262/client.crt: no such file or directory
E1103 21:12:56.294661   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/no-preload-716262/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-rmsb2" [7ed10e3b-43ab-4b8a-8ac3-8f51dd6fb7a8] Running
E1103 21:12:58.855801   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/no-preload-716262/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.010682916s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (60.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-768120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-768120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m0.711830364s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.71s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-768120 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-768120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-768120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nqqrc" [c48b8186-7a54-41f4-83c5-75061f6e40a0] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1103 21:13:14.358060   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/no-preload-716262/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nqqrc" [c48b8186-7a54-41f4-83c5-75061f6e40a0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.022387745s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.02s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-768120 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-768120 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kshrn" [1ecff733-0bd0-47c7-92b6-dfd93f21f4e8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kshrn" [1ecff733-0bd0-47c7-92b6-dfd93f21f4e8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.009301651s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nqqrc" [c48b8186-7a54-41f4-83c5-75061f6e40a0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009480968s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-051380 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (37.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-768120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-768120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (37.17144231s)
--- PASS: TestNetworkPlugins/group/bridge/Start (37.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-051380 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-051380 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-051380 -n default-k8s-diff-port-051380
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-051380 -n default-k8s-diff-port-051380: exit status 2 (333.333737ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-051380 -n default-k8s-diff-port-051380
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-051380 -n default-k8s-diff-port-051380: exit status 2 (392.180343ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-051380 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-051380 -n default-k8s-diff-port-051380
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-051380 -n default-k8s-diff-port-051380
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.26s)
E1103 21:13:36.786198   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/old-k8s-version-351184/client.crt: no such file or directory
E1103 21:13:36.792478   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/old-k8s-version-351184/client.crt: no such file or directory
E1103 21:13:36.803140   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/old-k8s-version-351184/client.crt: no such file or directory
E1103 21:13:36.824037   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/old-k8s-version-351184/client.crt: no such file or directory
E1103 21:13:36.864403   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/old-k8s-version-351184/client.crt: no such file or directory
E1103 21:13:36.944948   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/old-k8s-version-351184/client.crt: no such file or directory
E1103 21:13:37.105763   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/old-k8s-version-351184/client.crt: no such file or directory
E1103 21:13:37.426403   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/old-k8s-version-351184/client.crt: no such file or directory
E1103 21:13:38.067233   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/old-k8s-version-351184/client.crt: no such file or directory
E1103 21:13:39.347362   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/old-k8s-version-351184/client.crt: no such file or directory
E1103 21:13:41.908176   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/old-k8s-version-351184/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-768120 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-768120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-768120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-vpx88" [de3e4663-a4a5-4df5-ba13-d4e853a14923] Running
E1103 21:13:57.269838   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/old-k8s-version-351184/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.017133416s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-768120 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-768120 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kxg9c" [61a1d623-e7a2-490f-be76-8566adc02718] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kxg9c" [61a1d623-e7a2-490f-be76-8566adc02718] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.008434361s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-768120 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-768120 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-v9kbj" [43118596-d837-418a-a347-87866e77f2cb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-v9kbj" [43118596-d837-418a-a347-87866e77f2cb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.008843316s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-768120 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-768120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-768120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)
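
The HairPin probe has the netcat pod dial its own Service name, so the connection must loop back ("hairpin") through the Service VIP to the very pod that originated it. In the nc invocation, -z opens and closes the connection without sending data, -w 5 caps the connect wait at five seconds, and -i 5 spaces successive probes by five seconds. A by-hand sketch, assuming the Service is named netcat and cluster DNS resolves it from inside the pod:

	kubectl --context flannel-768120 exec deployment/netcat -- nslookup netcat
	kubectl --context flannel-768120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080 && echo hairpin-ok"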

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (32.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-768120 exec deployment/netcat -- nslookup kubernetes.default
E1103 21:14:15.798883   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/no-preload-716262/client.crt: no such file or directory
E1103 21:14:17.750303   11887 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/old-k8s-version-351184/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-768120 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136579811s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-768120 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-768120 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.15009451s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-768120 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (32.51s)
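
The 32.51s wall time is the retry loop at work: the first two lookups each timed out after roughly 15s ("connection timed out; no servers could be reached") and the harness re-ran the command until it succeeded, so a PASS here can still hide a slow-starting CoreDNS. A hedged shell equivalent of that retry-on-nonzero behavior:

	# keep retrying the in-cluster lookup until CoreDNS answers
	until kubectl --context bridge-768120 exec deployment/netcat -- nslookup kubernetes.default; do
		echo "lookup failed, retrying in 5s"; sleep 5
	done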

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-768120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-768120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (24/308)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-699129" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-699129
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-768120 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-768120

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-768120

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-768120

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-768120

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-768120

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-768120

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-768120

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-768120

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-768120

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-768120

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-768120

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-768120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-768120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-768120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-768120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-768120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-768120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-768120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-768120" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-768120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-768120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-768120" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 03 Nov 2023 21:00:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0-beta.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-598527
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt
    server: https://127.0.0.1:32933
  name: missing-upgrade-508611
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 03 Nov 2023 21:00:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0-beta.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-552269
contexts:
- context:
    cluster: kubernetes-upgrade-598527
    user: kubernetes-upgrade-598527
  name: kubernetes-upgrade-598527
- context:
    cluster: missing-upgrade-508611
    user: missing-upgrade-508611
  name: missing-upgrade-508611
- context:
    cluster: pause-552269
    extensions:
    - extension:
        last-update: Fri, 03 Nov 2023 21:00:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0-beta.0
      name: context_info
    namespace: default
    user: pause-552269
  name: pause-552269
current-context: kubernetes-upgrade-598527
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-598527
  user:
    client-certificate: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/kubernetes-upgrade-598527/client.crt
    client-key: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/kubernetes-upgrade-598527/client.key
- name: missing-upgrade-508611
  user:
    client-certificate: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/missing-upgrade-508611/client.crt
    client-key: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/missing-upgrade-508611/client.key
- name: pause-552269
  user:
    client-certificate: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/pause-552269/client.crt
    client-key: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/pause-552269/client.key
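
The dump above also explains every failed probe in this debugLogs block: the kubeconfig holds only the kubernetes-upgrade-598527, missing-upgrade-508611 and pause-552269 entries, and the kubenet-768120 profile was skipped before it was ever started, so each "kubectl --context kubenet-768120" call has no context to resolve. A quick check before debugging a profile, assuming the same kubeconfig:

	# contexts that actually exist
	kubectl config get-contexts -o name
	# a probe against an absent context fails exactly as logged above
	kubectl --context kubenet-768120 get pods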

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-768120

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-768120"

                                                
                                                
----------------------- debugLogs end: kubenet-768120 [took: 4.411400119s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-768120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-768120
--- SKIP: TestNetworkPlugins/group/kubenet (4.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-768120 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-768120

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-768120

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-768120

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-768120

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-768120

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-768120

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-768120

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-768120

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-768120

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-768120

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-768120

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-768120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-768120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-768120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-768120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-768120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-768120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-768120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-768120" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-768120

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-768120

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-768120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-768120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-768120

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-768120

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-768120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-768120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-768120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-768120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-768120" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 03 Nov 2023 21:00:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0-beta.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-598527
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt
    server: https://192.168.85.2:8443
  name: missing-upgrade-508611
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17545-5130/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 03 Nov 2023 21:00:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0-beta.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-552269
contexts:
- context:
    cluster: kubernetes-upgrade-598527
    user: kubernetes-upgrade-598527
  name: kubernetes-upgrade-598527
- context:
    cluster: missing-upgrade-508611
    user: missing-upgrade-508611
  name: missing-upgrade-508611
- context:
    cluster: pause-552269
    extensions:
    - extension:
        last-update: Fri, 03 Nov 2023 21:00:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0-beta.0
      name: context_info
    namespace: default
    user: pause-552269
  name: pause-552269
current-context: kubernetes-upgrade-598527
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-598527
  user:
    client-certificate: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/kubernetes-upgrade-598527/client.crt
    client-key: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/kubernetes-upgrade-598527/client.key
- name: missing-upgrade-508611
  user:
    client-certificate: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/missing-upgrade-508611/client.crt
    client-key: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/missing-upgrade-508611/client.key
- name: pause-552269
  user:
    client-certificate: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/pause-552269/client.crt
    client-key: /home/jenkins/minikube-integration/17545-5130/.minikube/profiles/pause-552269/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-768120

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

>>> host: cri-docker daemon config:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

>>> host: cri-dockerd version:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

>>> host: containerd daemon status:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

>>> host: containerd daemon config:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

>>> host: containerd config dump:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

>>> host: crio daemon status:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

>>> host: crio daemon config:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

>>> host: /etc/crio:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

>>> host: crio config:
* Profile "cilium-768120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-768120"

----------------------- debugLogs end: cilium-768120 [took: 4.414434287s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-768120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-768120
--- SKIP: TestNetworkPlugins/group/cilium (4.58s)