Test Report: Docker_Linux_crio 17866

8c6a2e99755a9a0a7d8f4ed404c065becb2fd234:2024-01-08:32612

Failed tests (5/316)

| Order | Failed Test                                         | Duration (s) |
|-------|-----------------------------------------------------|--------------|
| 35    | TestAddons/parallel/Ingress                         | 152.31       |
| 167   | TestIngressAddonLegacy/serial/ValidateIngressAddons | 176.17       |
| 217   | TestMultiNode/serial/PingHostFrom2Pods              | 3.1          |
| 239   | TestRunningBinaryUpgrade                            | 70.92        |
| 247   | TestStoppedBinaryUpgrade/Upgrade                    | 90.48        |
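Every step in the logs below is the harness shelling out to the minikube binary or kubectl (the "(dbg) Run:" lines), with each call bounded by a deadline. A minimal Go sketch of that pattern, reusing the curl command from the first failure below; this is an illustration of the pattern, not the harness's actual helper:

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// runWithTimeout mirrors the "(dbg) Run:" pattern: shell out to a binary,
// capture combined stdout/stderr, and bound the call with a deadline.
func runWithTimeout(timeout time.Duration, name string, args ...string) (string, error) {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	out, err := exec.CommandContext(ctx, name, args...).CombinedOutput()
	return string(out), err
}

func main() {
	// Binary path, profile, and command are taken from the log below.
	out, err := runWithTimeout(3*time.Minute,
		"out/minikube-linux-amd64", "-p", "addons-954584",
		"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
	fmt.Printf("err=%v\n%s", err, out)
}
```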
TestAddons/parallel/Ingress (152.31s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-954584 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-954584 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-954584 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e32cfb28-0d6f-4011-81d6-7cae419878bd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e32cfb28-0d6f-4011-81d6-7cae419878bd] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003860388s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-954584 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-954584 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.735659578s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
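The probe that failed is a plain HTTP GET against the node with an overridden Host header; curl's exit status 28 means the request timed out before ingress-nginx answered. An equivalent check can be run from the host against the node IP reported later in the log (192.168.49.2); a sketch, assuming the ingress controller is reachable on port 80 of that address:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeIngress sends the request the test's curl performs: GET the node IP on
// port 80 with the Host header set so ingress-nginx routes to the nginx pod.
func probeIngress(nodeIP, host string) error {
	req, err := http.NewRequest("GET", "http://"+nodeIP+"/", nil)
	if err != nil {
		return err
	}
	req.Host = host // equivalent of curl's -H 'Host: nginx.example.com'
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d bytes=%d\n", resp.StatusCode, len(body))
	return nil
}

func main() {
	if err := probeIngress("192.168.49.2", "nginx.example.com"); err != nil {
		fmt.Println("probe failed:", err)
	}
}
```

Setting req.Host on a net/http request is the Go equivalent of curl's Host-header override, so this exercises the same ingress routing path as the failing step.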
addons_test.go:286: (dbg) Run:  kubectl --context addons-954584 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-954584 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-954584 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-954584 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-954584 addons disable ingress --alsologtostderr -v=1: (7.625726068s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-954584
helpers_test.go:235: (dbg) docker inspect addons-954584:

-- stdout --
	[
	    {
	        "Id": "0e96d226e35e9edefe37da6406b1ca9031e05a066c9e0223fe573806ce93515e",
	        "Created": "2024-01-08T21:09:42.898862752Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 158274,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T21:09:43.186969482Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:127d4e2273d98a7f5001d818ad9d78fbfe93f6fb3b59e0136dea97a2dd09d9f5",
	        "ResolvConfPath": "/var/lib/docker/containers/0e96d226e35e9edefe37da6406b1ca9031e05a066c9e0223fe573806ce93515e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e96d226e35e9edefe37da6406b1ca9031e05a066c9e0223fe573806ce93515e/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e96d226e35e9edefe37da6406b1ca9031e05a066c9e0223fe573806ce93515e/hosts",
	        "LogPath": "/var/lib/docker/containers/0e96d226e35e9edefe37da6406b1ca9031e05a066c9e0223fe573806ce93515e/0e96d226e35e9edefe37da6406b1ca9031e05a066c9e0223fe573806ce93515e-json.log",
	        "Name": "/addons-954584",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-954584:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-954584",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4ca3420c5ff8f62c24a17f649261c32c4e42e4a88840c0ee080be94ce98314ce-init/diff:/var/lib/docker/overlay2/36c91ea73c875a756d19f8a4637b501585f27b26abca7b178ac0d11596ac7a7f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4ca3420c5ff8f62c24a17f649261c32c4e42e4a88840c0ee080be94ce98314ce/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4ca3420c5ff8f62c24a17f649261c32c4e42e4a88840c0ee080be94ce98314ce/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4ca3420c5ff8f62c24a17f649261c32c4e42e4a88840c0ee080be94ce98314ce/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-954584",
	                "Source": "/var/lib/docker/volumes/addons-954584/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-954584",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-954584",
	                "name.minikube.sigs.k8s.io": "addons-954584",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "de8735baec3f6fb3f100c63b5de87d3e22805bed422567923f577a0849baeff1",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/de8735baec3f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-954584": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0e96d226e35e",
	                        "addons-954584"
	                    ],
	                    "NetworkID": "52edb1d226373112401d4f9a225035b24992f2179dccf1a759872edcfeff946c",
	                    "EndpointID": "a99fa3da6447e1159d9af8b2cac4eaa1f2bf01fd9ef123f046584ea3590f676a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
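The post-mortem helpers read fields out of this inspect dump; for example, the published SSH endpoint used later in the provisioning log (127.0.0.1:32772 for 22/tcp) comes from NetworkSettings.Ports. A small Go sketch that extracts it from the docker inspect JSON (illustrative only, assuming the container above exists):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry models only the fields needed from `docker inspect` output:
// the map of container ports to their published host bindings.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	// `docker inspect` prints a JSON array with one entry per object.
	raw, err := exec.Command("docker", "inspect", "addons-954584").Output()
	if err != nil {
		panic(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(raw, &entries); err != nil || len(entries) == 0 {
		panic("no inspect entry")
	}
	for _, b := range entries[0].NetworkSettings.Ports["22/tcp"] {
		// For the container above this prints 127.0.0.1:32772.
		fmt.Printf("%s:%s\n", b.HostIp, b.HostPort)
	}
}
```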
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-954584 -n addons-954584
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-954584 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-954584 logs -n 25: (1.181156397s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-423014                                                                     | download-only-423014   | jenkins | v1.32.0 | 08 Jan 24 21:09 UTC | 08 Jan 24 21:09 UTC |
	| delete  | -p download-only-423014                                                                     | download-only-423014   | jenkins | v1.32.0 | 08 Jan 24 21:09 UTC | 08 Jan 24 21:09 UTC |
	| start   | --download-only -p                                                                          | download-docker-400635 | jenkins | v1.32.0 | 08 Jan 24 21:09 UTC |                     |
	|         | download-docker-400635                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-400635                                                                   | download-docker-400635 | jenkins | v1.32.0 | 08 Jan 24 21:09 UTC | 08 Jan 24 21:09 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-735016   | jenkins | v1.32.0 | 08 Jan 24 21:09 UTC |                     |
	|         | binary-mirror-735016                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36943                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-735016                                                                     | binary-mirror-735016   | jenkins | v1.32.0 | 08 Jan 24 21:09 UTC | 08 Jan 24 21:09 UTC |
	| addons  | disable dashboard -p                                                                        | addons-954584          | jenkins | v1.32.0 | 08 Jan 24 21:09 UTC |                     |
	|         | addons-954584                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-954584          | jenkins | v1.32.0 | 08 Jan 24 21:09 UTC |                     |
	|         | addons-954584                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-954584 --wait=true                                                                | addons-954584          | jenkins | v1.32.0 | 08 Jan 24 21:09 UTC | 08 Jan 24 21:11 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-954584          | jenkins | v1.32.0 | 08 Jan 24 21:11 UTC | 08 Jan 24 21:11 UTC |
	|         | -p addons-954584                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-954584 addons disable                                                                | addons-954584          | jenkins | v1.32.0 | 08 Jan 24 21:11 UTC | 08 Jan 24 21:11 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-954584 ip                                                                            | addons-954584          | jenkins | v1.32.0 | 08 Jan 24 21:11 UTC | 08 Jan 24 21:11 UTC |
	| addons  | addons-954584 addons disable                                                                | addons-954584          | jenkins | v1.32.0 | 08 Jan 24 21:11 UTC | 08 Jan 24 21:11 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-954584          | jenkins | v1.32.0 | 08 Jan 24 21:11 UTC | 08 Jan 24 21:11 UTC |
	|         | -p addons-954584                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-954584          | jenkins | v1.32.0 | 08 Jan 24 21:11 UTC | 08 Jan 24 21:12 UTC |
	|         | addons-954584                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-954584 ssh cat                                                                       | addons-954584          | jenkins | v1.32.0 | 08 Jan 24 21:12 UTC | 08 Jan 24 21:12 UTC |
	|         | /opt/local-path-provisioner/pvc-6097a29d-4577-4b55-9867-558bcd95400c_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-954584 addons disable                                                                | addons-954584          | jenkins | v1.32.0 | 08 Jan 24 21:12 UTC | 08 Jan 24 21:12 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-954584          | jenkins | v1.32.0 | 08 Jan 24 21:12 UTC | 08 Jan 24 21:12 UTC |
	|         | addons-954584                                                                               |                        |         |         |                     |                     |
	| addons  | addons-954584 addons                                                                        | addons-954584          | jenkins | v1.32.0 | 08 Jan 24 21:12 UTC | 08 Jan 24 21:12 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-954584 ssh curl -s                                                                   | addons-954584          | jenkins | v1.32.0 | 08 Jan 24 21:12 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-954584 addons                                                                        | addons-954584          | jenkins | v1.32.0 | 08 Jan 24 21:12 UTC | 08 Jan 24 21:12 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-954584 addons                                                                        | addons-954584          | jenkins | v1.32.0 | 08 Jan 24 21:12 UTC | 08 Jan 24 21:12 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-954584 ip                                                                            | addons-954584          | jenkins | v1.32.0 | 08 Jan 24 21:14 UTC | 08 Jan 24 21:14 UTC |
	| addons  | addons-954584 addons disable                                                                | addons-954584          | jenkins | v1.32.0 | 08 Jan 24 21:14 UTC | 08 Jan 24 21:14 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-954584 addons disable                                                                | addons-954584          | jenkins | v1.32.0 | 08 Jan 24 21:14 UTC | 08 Jan 24 21:14 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 21:09:19
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:09:19.133962  157655 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:09:19.134080  157655 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:09:19.134089  157655 out.go:309] Setting ErrFile to fd 2...
	I0108 21:09:19.134094  157655 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:09:19.134286  157655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-150013/.minikube/bin
	I0108 21:09:19.134921  157655 out.go:303] Setting JSON to false
	I0108 21:09:19.135840  157655 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":13911,"bootTime":1704734248,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:09:19.135906  157655 start.go:138] virtualization: kvm guest
	I0108 21:09:19.138086  157655 out.go:177] * [addons-954584] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:09:19.139566  157655 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 21:09:19.140833  157655 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:09:19.139637  157655 notify.go:220] Checking for updates...
	I0108 21:09:19.143326  157655 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-150013/kubeconfig
	I0108 21:09:19.144685  157655 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-150013/.minikube
	I0108 21:09:19.145987  157655 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:09:19.147209  157655 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:09:19.148652  157655 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:09:19.168662  157655 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 21:09:19.168776  157655 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:09:19.220942  157655 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-08 21:09:19.213097228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:09:19.221041  157655 docker.go:295] overlay module found
	I0108 21:09:19.222971  157655 out.go:177] * Using the docker driver based on user configuration
	I0108 21:09:19.224227  157655 start.go:298] selected driver: docker
	I0108 21:09:19.224241  157655 start.go:902] validating driver "docker" against <nil>
	I0108 21:09:19.224257  157655 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:09:19.225084  157655 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:09:19.276119  157655 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-08 21:09:19.267717946 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:09:19.276357  157655 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0108 21:09:19.276592  157655 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:09:19.278470  157655 out.go:177] * Using Docker driver with root privileges
	I0108 21:09:19.280210  157655 cni.go:84] Creating CNI manager for ""
	I0108 21:09:19.280241  157655 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 21:09:19.280258  157655 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 21:09:19.280276  157655 start_flags.go:321] config:
	{Name:addons-954584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-954584 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:09:19.281994  157655 out.go:177] * Starting control plane node addons-954584 in cluster addons-954584
	I0108 21:09:19.283233  157655 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 21:09:19.284552  157655 out.go:177] * Pulling base image v0.0.42-1703790982-17866 ...
	I0108 21:09:19.285890  157655 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:09:19.285930  157655 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17866-150013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 21:09:19.285937  157655 cache.go:56] Caching tarball of preloaded images
	I0108 21:09:19.285995  157655 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local docker daemon
	I0108 21:09:19.286036  157655 preload.go:174] Found /home/jenkins/minikube-integration/17866-150013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 21:09:19.286050  157655 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 21:09:19.286430  157655 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/config.json ...
	I0108 21:09:19.286452  157655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/config.json: {Name:mk1d5cfb4f5615249c7134323664ece7dd5874c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:09:19.301391  157655 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d to local cache
	I0108 21:09:19.301520  157655 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local cache directory
	I0108 21:09:19.301539  157655 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local cache directory, skipping pull
	I0108 21:09:19.301545  157655 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d exists in cache, skipping pull
	I0108 21:09:19.301560  157655 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d as a tarball
	I0108 21:09:19.301570  157655 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d from local cache
	I0108 21:09:30.098315  157655 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d from cached tarball
	I0108 21:09:30.098364  157655 cache.go:194] Successfully downloaded all kic artifacts
	I0108 21:09:30.098421  157655 start.go:365] acquiring machines lock for addons-954584: {Name:mk6a5ca84691f133d35cf8c78fc0a075a3eb2086 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:09:30.098586  157655 start.go:369] acquired machines lock for "addons-954584" in 134.311µs
	I0108 21:09:30.098625  157655 start.go:93] Provisioning new machine with config: &{Name:addons-954584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-954584 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 21:09:30.098745  157655 start.go:125] createHost starting for "" (driver="docker")
	I0108 21:09:30.100507  157655 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0108 21:09:30.100843  157655 start.go:159] libmachine.API.Create for "addons-954584" (driver="docker")
	I0108 21:09:30.100880  157655 client.go:168] LocalClient.Create starting
	I0108 21:09:30.101007  157655 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem
	I0108 21:09:30.397622  157655 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/cert.pem
	I0108 21:09:30.513438  157655 cli_runner.go:164] Run: docker network inspect addons-954584 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 21:09:30.528710  157655 cli_runner.go:211] docker network inspect addons-954584 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 21:09:30.528776  157655 network_create.go:281] running [docker network inspect addons-954584] to gather additional debugging logs...
	I0108 21:09:30.528801  157655 cli_runner.go:164] Run: docker network inspect addons-954584
	W0108 21:09:30.543052  157655 cli_runner.go:211] docker network inspect addons-954584 returned with exit code 1
	I0108 21:09:30.543082  157655 network_create.go:284] error running [docker network inspect addons-954584]: docker network inspect addons-954584: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-954584 not found
	I0108 21:09:30.543102  157655 network_create.go:286] output of [docker network inspect addons-954584]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-954584 not found
	
	** /stderr **
	I0108 21:09:30.543214  157655 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:09:30.557572  157655 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002922680}
	I0108 21:09:30.557612  157655 network_create.go:124] attempt to create docker network addons-954584 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0108 21:09:30.557661  157655 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-954584 addons-954584
	I0108 21:09:30.607812  157655 network_create.go:108] docker network addons-954584 192.168.49.0/24 created
	I0108 21:09:30.607846  157655 kic.go:121] calculated static IP "192.168.49.2" for the "addons-954584" container
	I0108 21:09:30.607921  157655 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 21:09:30.623066  157655 cli_runner.go:164] Run: docker volume create addons-954584 --label name.minikube.sigs.k8s.io=addons-954584 --label created_by.minikube.sigs.k8s.io=true
	I0108 21:09:30.638635  157655 oci.go:103] Successfully created a docker volume addons-954584
	I0108 21:09:30.638715  157655 cli_runner.go:164] Run: docker run --rm --name addons-954584-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-954584 --entrypoint /usr/bin/test -v addons-954584:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -d /var/lib
	I0108 21:09:37.761077  157655 cli_runner.go:217] Completed: docker run --rm --name addons-954584-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-954584 --entrypoint /usr/bin/test -v addons-954584:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -d /var/lib: (7.122311836s)
	I0108 21:09:37.761112  157655 oci.go:107] Successfully prepared a docker volume addons-954584
	I0108 21:09:37.761145  157655 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:09:37.761167  157655 kic.go:194] Starting extracting preloaded images to volume ...
	I0108 21:09:37.761221  157655 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17866-150013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-954584:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 21:09:42.831374  157655 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17866-150013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-954584:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -I lz4 -xf /preloaded.tar -C /extractDir: (5.070105604s)
	I0108 21:09:42.831408  157655 kic.go:203] duration metric: took 5.070237 seconds to extract preloaded images to volume
	W0108 21:09:42.831549  157655 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 21:09:42.831649  157655 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 21:09:42.884660  157655 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-954584 --name addons-954584 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-954584 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-954584 --network addons-954584 --ip 192.168.49.2 --volume addons-954584:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d
	I0108 21:09:43.194488  157655 cli_runner.go:164] Run: docker container inspect addons-954584 --format={{.State.Running}}
	I0108 21:09:43.212622  157655 cli_runner.go:164] Run: docker container inspect addons-954584 --format={{.State.Status}}
	I0108 21:09:43.229374  157655 cli_runner.go:164] Run: docker exec addons-954584 stat /var/lib/dpkg/alternatives/iptables
	I0108 21:09:43.293109  157655 oci.go:144] the created container "addons-954584" has a running status.
	I0108 21:09:43.293143  157655 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17866-150013/.minikube/machines/addons-954584/id_rsa...
	I0108 21:09:43.452257  157655 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17866-150013/.minikube/machines/addons-954584/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 21:09:43.472639  157655 cli_runner.go:164] Run: docker container inspect addons-954584 --format={{.State.Status}}
	I0108 21:09:43.492985  157655 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 21:09:43.493006  157655 kic_runner.go:114] Args: [docker exec --privileged addons-954584 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 21:09:43.554331  157655 cli_runner.go:164] Run: docker container inspect addons-954584 --format={{.State.Status}}
	I0108 21:09:43.570985  157655 machine.go:88] provisioning docker machine ...
	I0108 21:09:43.571022  157655 ubuntu.go:169] provisioning hostname "addons-954584"
	I0108 21:09:43.571086  157655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-954584
	I0108 21:09:43.587489  157655 main.go:141] libmachine: Using SSH client type: native
	I0108 21:09:43.588067  157655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0108 21:09:43.588092  157655 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-954584 && echo "addons-954584" | sudo tee /etc/hostname
	I0108 21:09:43.589588  157655 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43862->127.0.0.1:32772: read: connection reset by peer
	I0108 21:09:46.735141  157655 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-954584
	
	I0108 21:09:46.735240  157655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-954584
	I0108 21:09:46.750344  157655 main.go:141] libmachine: Using SSH client type: native
	I0108 21:09:46.750774  157655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0108 21:09:46.750803  157655 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-954584' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-954584/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-954584' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:09:46.889162  157655 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:09:46.889191  157655 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17866-150013/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-150013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-150013/.minikube}
	I0108 21:09:46.889208  157655 ubuntu.go:177] setting up certificates
	I0108 21:09:46.889220  157655 provision.go:83] configureAuth start
	I0108 21:09:46.889290  157655 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-954584
	I0108 21:09:46.904233  157655 provision.go:138] copyHostCerts
	I0108 21:09:46.904313  157655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-150013/.minikube/key.pem (1675 bytes)
	I0108 21:09:46.904470  157655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-150013/.minikube/ca.pem (1078 bytes)
	I0108 21:09:46.904543  157655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-150013/.minikube/cert.pem (1123 bytes)
	I0108 21:09:46.904600  157655 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-150013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca-key.pem org=jenkins.addons-954584 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-954584]
	I0108 21:09:47.058329  157655 provision.go:172] copyRemoteCerts
	I0108 21:09:47.058397  157655 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:09:47.058431  157655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-954584
	I0108 21:09:47.074191  157655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/addons-954584/id_rsa Username:docker}
	I0108 21:09:47.169691  157655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 21:09:47.190232  157655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:09:47.210733  157655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0108 21:09:47.231022  157655 provision.go:86] duration metric: configureAuth took 341.789764ms
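For reference, the three scp steps above install the Docker machine TLS material under /etc/docker inside the node. A quick sanity check of the server certificate's SANs (a sketch, not part of the test run, assuming openssl is present in the kicbase image):

	minikube -p addons-954584 ssh "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"
	# expect the SANs from the generation step above: 192.168.49.2, 127.0.0.1, localhost, minikube, addons-954584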
	I0108 21:09:47.231046  157655 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:09:47.231237  157655 config.go:182] Loaded profile config "addons-954584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:09:47.231359  157655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-954584
	I0108 21:09:47.247211  157655 main.go:141] libmachine: Using SSH client type: native
	I0108 21:09:47.247547  157655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0108 21:09:47.247566  157655 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 21:09:47.465869  157655 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 21:09:47.465901  157655 machine.go:91] provisioned docker machine in 3.894890321s
	I0108 21:09:47.465915  157655 client.go:171] LocalClient.Create took 17.365024s
	I0108 21:09:47.465943  157655 start.go:167] duration metric: libmachine.API.Create for "addons-954584" took 17.365099398s
	I0108 21:09:47.465958  157655 start.go:300] post-start starting for "addons-954584" (driver="docker")
	I0108 21:09:47.465978  157655 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:09:47.466058  157655 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:09:47.466111  157655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-954584
	I0108 21:09:47.483246  157655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/addons-954584/id_rsa Username:docker}
	I0108 21:09:47.581606  157655 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:09:47.584296  157655 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:09:47.584329  157655 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:09:47.584337  157655 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:09:47.584345  157655 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 21:09:47.584354  157655 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-150013/.minikube/addons for local assets ...
	I0108 21:09:47.584396  157655 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-150013/.minikube/files for local assets ...
	I0108 21:09:47.584418  157655 start.go:303] post-start completed in 118.448551ms
	I0108 21:09:47.584658  157655 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-954584
	I0108 21:09:47.601220  157655 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/config.json ...
	I0108 21:09:47.601436  157655 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:09:47.601497  157655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-954584
	I0108 21:09:47.616453  157655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/addons-954584/id_rsa Username:docker}
	I0108 21:09:47.714191  157655 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:09:47.718287  157655 start.go:128] duration metric: createHost completed in 17.619524931s
	I0108 21:09:47.718312  157655 start.go:83] releasing machines lock for "addons-954584", held for 17.619702745s
	I0108 21:09:47.718386  157655 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-954584
	I0108 21:09:47.734206  157655 ssh_runner.go:195] Run: cat /version.json
	I0108 21:09:47.734253  157655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-954584
	I0108 21:09:47.734254  157655 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:09:47.734321  157655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-954584
	I0108 21:09:47.750169  157655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/addons-954584/id_rsa Username:docker}
	I0108 21:09:47.751692  157655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/addons-954584/id_rsa Username:docker}
	I0108 21:09:47.928101  157655 ssh_runner.go:195] Run: systemctl --version
	I0108 21:09:47.932102  157655 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 21:09:48.067787  157655 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 21:09:48.071958  157655 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:09:48.089231  157655 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 21:09:48.089314  157655 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:09:48.114770  157655 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
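Because this cluster will use kindnet, minikube disables the image's default CNI configs rather than deleting them: the two find/mv runs above rename the loopback and bridge/podman configs with a .mk_disabled suffix. A quick way to confirm the result (sketch):

	minikube -p addons-954584 ssh "ls /etc/cni/net.d"
	# expect 87-podman-bridge.conflist.mk_disabled and 100-crio-bridge.conf.mk_disabled among the entries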
	I0108 21:09:48.114795  157655 start.go:475] detecting cgroup driver to use...
	I0108 21:09:48.114832  157655 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 21:09:48.114893  157655 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:09:48.127789  157655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:09:48.137250  157655 docker.go:203] disabling cri-docker service (if available) ...
	I0108 21:09:48.137308  157655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 21:09:48.148670  157655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 21:09:48.160657  157655 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 21:09:48.239236  157655 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 21:09:48.322132  157655 docker.go:219] disabling docker service ...
	I0108 21:09:48.322188  157655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:09:48.338893  157655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:09:48.348637  157655 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:09:48.422357  157655 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:09:48.498857  157655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:09:48.508592  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:09:48.522117  157655 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 21:09:48.522188  157655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:09:48.530295  157655 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 21:09:48.530343  157655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:09:48.538257  157655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:09:48.546057  157655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:09:48.554011  157655 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 21:09:48.561328  157655 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:09:48.568006  157655 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:09:48.574744  157655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:09:48.649415  157655 ssh_runner.go:195] Run: sudo systemctl restart crio
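The sed edits above pin the pause image and switch CRI-O to the cgroupfs driver before the restart. Under those edits the drop-in ends up with entries like the following (a minimal sketch of /etc/crio/crio.conf.d/02-crio.conf; the section names are assumed from the stock CRI-O layout, and the real file carries further defaults):

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"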
	I0108 21:09:48.762900  157655 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 21:09:48.763032  157655 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 21:09:48.766563  157655 start.go:543] Will wait 60s for crictl version
	I0108 21:09:48.766616  157655 ssh_runner.go:195] Run: which crictl
	I0108 21:09:48.769555  157655 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 21:09:48.800314  157655 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0108 21:09:48.800393  157655 ssh_runner.go:195] Run: crio --version
	I0108 21:09:48.832858  157655 ssh_runner.go:195] Run: crio --version
	I0108 21:09:48.866506  157655 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0108 21:09:48.867818  157655 cli_runner.go:164] Run: docker network inspect addons-954584 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:09:48.883493  157655 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0108 21:09:48.886935  157655 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
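The grep-and-rewrite pipeline above is idempotent: it strips any stale host.minikube.internal line and re-appends the gateway mapping, so /etc/hosts inside the node ends up with an entry like:

	192.168.49.1	host.minikube.internal

The same pattern adds control-plane.minikube.internal further down in the log.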
	I0108 21:09:48.896650  157655 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:09:48.896697  157655 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:09:48.951342  157655 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 21:09:48.951370  157655 crio.go:415] Images already preloaded, skipping extraction
	I0108 21:09:48.951426  157655 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:09:48.981570  157655 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 21:09:48.981599  157655 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:09:48.981677  157655 ssh_runner.go:195] Run: crio config
	I0108 21:09:49.022920  157655 cni.go:84] Creating CNI manager for ""
	I0108 21:09:49.022945  157655 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 21:09:49.022970  157655 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:09:49.022997  157655 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-954584 NodeName:addons-954584 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 21:09:49.023175  157655 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-954584"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
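The generated config above spans four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A file like this can be exercised without touching a node (a sketch, assuming a local kubeadm binary matching v1.28.4):

	kubeadm init --config kubeadm.yaml --dry-run
	# runs preflight checks and renders the manifests without changing the host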
	
	I0108 21:09:49.023279  157655 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-954584 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-954584 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
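The ExecStart override above lands as a systemd drop-in (10-kubeadm.conf) in the scp steps that follow. To inspect the merged unit on the running node (sketch):

	minikube -p addons-954584 ssh "systemctl cat kubelet"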
	I0108 21:09:49.023343  157655 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 21:09:49.031313  157655 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:09:49.031392  157655 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:09:49.038927  157655 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0108 21:09:49.053744  157655 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:09:49.068631  157655 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0108 21:09:49.082919  157655 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:09:49.085783  157655 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:09:49.094653  157655 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584 for IP: 192.168.49.2
	I0108 21:09:49.094688  157655 certs.go:190] acquiring lock for shared ca certs: {Name:mk66e763e1c1c88a577c7e7f60df668cab98f63b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:09:49.094810  157655 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17866-150013/.minikube/ca.key
	I0108 21:09:49.230584  157655 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt ...
	I0108 21:09:49.230623  157655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt: {Name:mka0cff655fd31cca6f7ff920cc88e9be6080611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:09:49.230858  157655 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-150013/.minikube/ca.key ...
	I0108 21:09:49.230875  157655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/.minikube/ca.key: {Name:mk555b296575687cc70d554276bdcebb8f661cc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:09:49.230977  157655 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17866-150013/.minikube/proxy-client-ca.key
	I0108 21:09:49.546661  157655 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-150013/.minikube/proxy-client-ca.crt ...
	I0108 21:09:49.546695  157655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/.minikube/proxy-client-ca.crt: {Name:mkfbb2dd681ab959ccce8c39d424b770f1b73ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:09:49.546895  157655 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-150013/.minikube/proxy-client-ca.key ...
	I0108 21:09:49.546916  157655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/.minikube/proxy-client-ca.key: {Name:mkbe3f71f0bf8d18530f89712b88b165e4370b83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:09:49.547064  157655 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.key
	I0108 21:09:49.547086  157655 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt with IP's: []
	I0108 21:09:49.837742  157655 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt ...
	I0108 21:09:49.837776  157655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt: {Name:mk5313eec3704a773441ae96ce419a03344a2155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:09:49.837968  157655 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.key ...
	I0108 21:09:49.837988  157655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.key: {Name:mkfe49cc05aa3063d63900203656d9a4b083f63f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:09:49.838094  157655 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/apiserver.key.dd3b5fb2
	I0108 21:09:49.838120  157655 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 21:09:49.931133  157655 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/apiserver.crt.dd3b5fb2 ...
	I0108 21:09:49.931165  157655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/apiserver.crt.dd3b5fb2: {Name:mkff2b8c3d0a544b3afd455cefe5fdd908c27492 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:09:49.931349  157655 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/apiserver.key.dd3b5fb2 ...
	I0108 21:09:49.931370  157655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/apiserver.key.dd3b5fb2: {Name:mkc0ad0e9f908eecaa08ba97e8fe29df220bd3b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:09:49.931463  157655 certs.go:337] copying /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/apiserver.crt
	I0108 21:09:49.931629  157655 certs.go:341] copying /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/apiserver.key
	I0108 21:09:49.931707  157655 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/proxy-client.key
	I0108 21:09:49.931740  157655 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/proxy-client.crt with IP's: []
	I0108 21:09:50.005358  157655 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/proxy-client.crt ...
	I0108 21:09:50.005394  157655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/proxy-client.crt: {Name:mkb5e2ade4267eb386458b2858f037f79e8c82e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:09:50.005588  157655 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/proxy-client.key ...
	I0108 21:09:50.005611  157655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/proxy-client.key: {Name:mk573d5637ab37559701edc1336b59b1cf18dbc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:09:50.005846  157655 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca-key.pem (1679 bytes)
	I0108 21:09:50.005893  157655 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:09:50.005933  157655 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/home/jenkins/minikube-integration/17866-150013/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:09:50.005969  157655 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/home/jenkins/minikube-integration/17866-150013/.minikube/certs/key.pem (1675 bytes)
	I0108 21:09:50.006665  157655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:09:50.027949  157655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 21:09:50.047655  157655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:09:50.067434  157655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 21:09:50.087500  157655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:09:50.107214  157655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:09:50.126880  157655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:09:50.146248  157655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 21:09:50.166331  157655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:09:50.186614  157655 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:09:50.201577  157655 ssh_runner.go:195] Run: openssl version
	I0108 21:09:50.206204  157655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:09:50.214128  157655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:09:50.217064  157655 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:09:50.217125  157655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:09:50.223136  157655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
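The b5213941.0 link name is the OpenSSL subject-hash form that certificate lookup expects in /etc/ssl/certs; it can be reproduced from the CA itself (sketch):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, the hash the symlink above is named after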
	I0108 21:09:50.230909  157655 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 21:09:50.233678  157655 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:09:50.233715  157655 kubeadm.go:404] StartCluster: {Name:addons-954584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-954584 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:09:50.233809  157655 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 21:09:50.233843  157655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:09:50.265644  157655 cri.go:89] found id: ""
	I0108 21:09:50.265718  157655 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:09:50.273486  157655 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:09:50.280963  157655 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:09:50.281026  157655 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:09:50.288396  157655 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:09:50.288436  157655 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:09:50.364418  157655 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I0108 21:09:50.423867  157655 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:09:59.380642  157655 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 21:09:59.380734  157655 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 21:09:59.380830  157655 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:09:59.380912  157655 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I0108 21:09:59.380954  157655 kubeadm.go:322] OS: Linux
	I0108 21:09:59.381018  157655 kubeadm.go:322] CGROUPS_CPU: enabled
	I0108 21:09:59.381060  157655 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0108 21:09:59.381129  157655 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0108 21:09:59.381213  157655 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0108 21:09:59.381276  157655 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0108 21:09:59.381340  157655 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0108 21:09:59.381414  157655 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0108 21:09:59.381519  157655 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0108 21:09:59.381561  157655 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0108 21:09:59.381624  157655 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:09:59.381715  157655 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:09:59.381803  157655 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:09:59.381873  157655 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:09:59.383548  157655 out.go:204]   - Generating certificates and keys ...
	I0108 21:09:59.383636  157655 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 21:09:59.383713  157655 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 21:09:59.383788  157655 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 21:09:59.383849  157655 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 21:09:59.383909  157655 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 21:09:59.383979  157655 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 21:09:59.384053  157655 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 21:09:59.384189  157655 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-954584 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 21:09:59.384263  157655 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 21:09:59.384389  157655 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-954584 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 21:09:59.384484  157655 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 21:09:59.384572  157655 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 21:09:59.384624  157655 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 21:09:59.384687  157655 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:09:59.384751  157655 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:09:59.384831  157655 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:09:59.384922  157655 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:09:59.385004  157655 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:09:59.385109  157655 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:09:59.385197  157655 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:09:59.386781  157655 out.go:204]   - Booting up control plane ...
	I0108 21:09:59.386890  157655 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:09:59.386983  157655 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:09:59.387064  157655 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:09:59.387204  157655 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:09:59.387323  157655 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:09:59.387380  157655 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 21:09:59.387588  157655 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:09:59.387694  157655 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.001938 seconds
	I0108 21:09:59.387826  157655 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:09:59.387935  157655 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:09:59.388010  157655 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:09:59.388177  157655 kubeadm.go:322] [mark-control-plane] Marking the node addons-954584 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:09:59.388251  157655 kubeadm.go:322] [bootstrap-token] Using token: wc8rjk.l7a4pjqyo4q7iuao
	I0108 21:09:59.389492  157655 out.go:204]   - Configuring RBAC rules ...
	I0108 21:09:59.389597  157655 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:09:59.389692  157655 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:09:59.389847  157655 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:09:59.389982  157655 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:09:59.390090  157655 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:09:59.390207  157655 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:09:59.390312  157655 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:09:59.390392  157655 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 21:09:59.390466  157655 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 21:09:59.390477  157655 kubeadm.go:322] 
	I0108 21:09:59.390566  157655 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 21:09:59.390580  157655 kubeadm.go:322] 
	I0108 21:09:59.390683  157655 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 21:09:59.390696  157655 kubeadm.go:322] 
	I0108 21:09:59.390731  157655 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 21:09:59.390819  157655 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:09:59.390894  157655 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:09:59.390906  157655 kubeadm.go:322] 
	I0108 21:09:59.390984  157655 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 21:09:59.390993  157655 kubeadm.go:322] 
	I0108 21:09:59.391033  157655 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:09:59.391039  157655 kubeadm.go:322] 
	I0108 21:09:59.391113  157655 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 21:09:59.391218  157655 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:09:59.391308  157655 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:09:59.391317  157655 kubeadm.go:322] 
	I0108 21:09:59.391425  157655 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:09:59.391499  157655 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 21:09:59.391505  157655 kubeadm.go:322] 
	I0108 21:09:59.391581  157655 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token wc8rjk.l7a4pjqyo4q7iuao \
	I0108 21:09:59.391705  157655 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fe80ea8f0241372b35f859c8f235bcbcae49b73ca5a44c92d8472de9d18d4109 \
	I0108 21:09:59.391725  157655 kubeadm.go:322] 	--control-plane 
	I0108 21:09:59.391731  157655 kubeadm.go:322] 
	I0108 21:09:59.391800  157655 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:09:59.391806  157655 kubeadm.go:322] 
	I0108 21:09:59.391900  157655 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token wc8rjk.l7a4pjqyo4q7iuao \
	I0108 21:09:59.392027  157655 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fe80ea8f0241372b35f859c8f235bcbcae49b73ca5a44c92d8472de9d18d4109 
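With kubeadm init reporting success, the API server can be queried through the kubeconfig minikube just wrote (sketch; the node stays NotReady until the kindnet manifest is applied below):

	kubectl --context addons-954584 get nodes
	# expect addons-954584 with role control-plane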
	I0108 21:09:59.392057  157655 cni.go:84] Creating CNI manager for ""
	I0108 21:09:59.392071  157655 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 21:09:59.393378  157655 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:09:59.394512  157655 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:09:59.416714  157655 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 21:09:59.416733  157655 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 21:09:59.433611  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:10:00.062714  157655 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:10:00.062812  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=addons-954584 minikube.k8s.io/updated_at=2024_01_08T21_10_00_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:00.062812  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:00.134648  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:00.146138  157655 ops.go:34] apiserver oom_adj: -16
	I0108 21:10:00.634872  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:01.134822  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:01.635225  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:02.135385  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:02.635489  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:03.134934  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:03.634710  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:04.135317  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:04.635304  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:05.135304  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:05.635553  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:06.135436  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:06.634928  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:07.134746  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:07.634685  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:08.135454  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:08.635419  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:09.135504  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:09.634736  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:10.134951  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:10.635514  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:11.135522  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:11.635577  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:12.134707  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:12.634679  157655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:10:12.717411  157655 kubeadm.go:1088] duration metric: took 12.654664249s to wait for elevateKubeSystemPrivileges.
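The repeated `kubectl get sa default` runs above are minikube polling until the default service account exists, which gates the cluster-admin binding for kube-system:default (the minikube-rbac clusterrolebinding created alongside it); the loop ends once the token controller has created the account. A manual equivalent on recent kubectl (sketch; --for=jsonpath needs kubectl >= 1.23):

	kubectl --context addons-954584 -n default wait --for=jsonpath='{.metadata.name}'=default serviceaccount/default --timeout=60s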
	I0108 21:10:12.717522  157655 kubeadm.go:406] StartCluster complete in 22.483808609s
	I0108 21:10:12.717558  157655 settings.go:142] acquiring lock: {Name:mka49c6122422560714ade880e41fa20698ed59b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:10:12.717711  157655 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-150013/kubeconfig
	I0108 21:10:12.718295  157655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/kubeconfig: {Name:mk7bacc6ac7c9afd0d9363f33909f58b6056dc76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:10:12.718652  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:10:12.718940  157655 config.go:182] Loaded profile config "addons-954584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:10:12.719063  157655 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0108 21:10:12.719156  157655 addons.go:69] Setting yakd=true in profile "addons-954584"
	I0108 21:10:12.719174  157655 addons.go:237] Setting addon yakd=true in "addons-954584"
	I0108 21:10:12.719232  157655 host.go:66] Checking if "addons-954584" exists ...
	I0108 21:10:12.719584  157655 addons.go:69] Setting ingress-dns=true in profile "addons-954584"
	I0108 21:10:12.719604  157655 addons.go:237] Setting addon ingress-dns=true in "addons-954584"
	I0108 21:10:12.719657  157655 host.go:66] Checking if "addons-954584" exists ...
	I0108 21:10:12.719801  157655 cli_runner.go:164] Run: docker container inspect addons-954584 --format={{.State.Status}}
	I0108 21:10:12.719956  157655 addons.go:69] Setting registry=true in profile "addons-954584"
	I0108 21:10:12.720008  157655 addons.go:69] Setting storage-provisioner=true in profile "addons-954584"
	I0108 21:10:12.720025  157655 addons.go:237] Setting addon registry=true in "addons-954584"
	I0108 21:10:12.720030  157655 addons.go:237] Setting addon storage-provisioner=true in "addons-954584"
	I0108 21:10:12.720032  157655 addons.go:69] Setting default-storageclass=true in profile "addons-954584"
	I0108 21:10:12.720074  157655 cli_runner.go:164] Run: docker container inspect addons-954584 --format={{.State.Status}}
	I0108 21:10:12.720078  157655 host.go:66] Checking if "addons-954584" exists ...
	I0108 21:10:12.720092  157655 host.go:66] Checking if "addons-954584" exists ...
	I0108 21:10:12.720114  157655 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-954584"
	I0108 21:10:12.720395  157655 addons.go:69] Setting volumesnapshots=true in profile "addons-954584"
	I0108 21:10:12.720410  157655 addons.go:237] Setting addon volumesnapshots=true in "addons-954584"
	I0108 21:10:12.720424  157655 cli_runner.go:164] Run: docker container inspect addons-954584 --format={{.State.Status}}
	I0108 21:10:12.720447  157655 host.go:66] Checking if "addons-954584" exists ...
	I0108 21:10:12.720541  157655 cli_runner.go:164] Run: docker container inspect addons-954584 --format={{.State.Status}}
	I0108 21:10:12.720815  157655 cli_runner.go:164] Run: docker container inspect addons-954584 --format={{.State.Status}}
	I0108 21:10:12.720877  157655 addons.go:69] Setting metrics-server=true in profile "addons-954584"
	I0108 21:10:12.720923  157655 addons.go:237] Setting addon metrics-server=true in "addons-954584"
	I0108 21:10:12.720976  157655 host.go:66] Checking if "addons-954584" exists ...
	I0108 21:10:12.721166  157655 cli_runner.go:164] Run: docker container inspect addons-954584 --format={{.State.Status}}
	I0108 21:10:12.721440  157655 cli_runner.go:164] Run: docker container inspect addons-954584 --format={{.State.Status}}
	I0108 21:10:12.721480  157655 addons.go:69] Setting gcp-auth=true in profile "addons-954584"
	I0108 21:10:12.721504  157655 mustload.go:65] Loading cluster: addons-954584
	I0108 21:10:12.721719  157655 config.go:182] Loaded profile config "addons-954584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:10:12.721961  157655 cli_runner.go:164] Run: docker container inspect addons-954584 --format={{.State.Status}}
	I0108 21:10:12.719984  157655 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-954584"
	I0108 21:10:12.722598  157655 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-954584"
	I0108 21:10:12.723452  157655 cli_runner.go:164] Run: docker container inspect addons-954584 --format={{.State.Status}}
	I0108 21:10:12.725884  157655 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-954584"
	I0108 21:10:12.725916  157655 addons.go:237] Setting addon nvidia-device-plugin=true in "addons-954584"
	I0108 21:10:12.725963  157655 host.go:66] Checking if "addons-954584" exists ...
	I0108 21:10:12.726099  157655 addons.go:69] Setting cloud-spanner=true in profile "addons-954584"
	I0108 21:10:12.726151  157655 addons.go:237] Setting addon cloud-spanner=true in "addons-954584"
	I0108 21:10:12.726209  157655 host.go:66] Checking if "addons-954584" exists ...
	I0108 21:10:12.726455  157655 cli_runner.go:164] Run: docker container inspect addons-954584 --format={{.State.Status}}
	I0108 21:10:12.726660  157655 addons.go:69] Setting inspektor-gadget=true in profile "addons-954584"
	I0108 21:10:12.726675  157655 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-954584"
	I0108 21:10:12.726686  157655 addons.go:237] Setting addon inspektor-gadget=true in "addons-954584"
	I0108 21:10:12.726715  157655 addons.go:237] Setting addon csi-hostpath-driver=true in "addons-954584"
	I0108 21:10:12.726724  157655 host.go:66] Checking if "addons-954584" exists ...
	I0108 21:10:12.726756  157655 host.go:66] Checking if "addons-954584" exists ...
	I0108 21:10:12.727147  157655 cli_runner.go:164] Run: docker container inspect addons-954584 --format={{.State.Status}}
	I0108 21:10:12.727195  157655 addons.go:69] Setting ingress=true in profile "addons-954584"
	I0108 21:10:12.727218  157655 addons.go:237] Setting addon ingress=true in "addons-954584"
	I0108 21:10:12.727268  157655 host.go:66] Checking if "addons-954584" exists ...
	I0108 21:10:12.726669  157655 cli_runner.go:164] Run: docker container inspect addons-954584 --format={{.State.Status}}
	I0108 21:10:12.727560  157655 addons.go:69] Setting helm-tiller=true in profile "addons-954584"
	I0108 21:10:12.727597  157655 addons.go:237] Setting addon helm-tiller=true in "addons-954584"
	I0108 21:10:12.727641  157655 host.go:66] Checking if "addons-954584" exists ...
	I0108 21:10:12.727147  157655 cli_runner.go:164] Run: docker container inspect addons-954584 --format={{.State.Status}}
	I0108 21:10:12.755175  157655 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0108 21:10:12.756742  157655 out.go:177]   - Using image docker.io/registry:2.8.3
	I0108 21:10:12.761422  157655 addons.go:429] installing /etc/kubernetes/addons/registry-rc.yaml
	I0108 21:10:12.761576  157655 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0108 21:10:12.761596  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0108 21:10:12.763809  157655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-954584
	I0108 21:10:12.761542  157655 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:10:12.761504  157655 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0108 21:10:12.763505  157655 host.go:66] Checking if "addons-954584" exists ...
	I0108 21:10:12.763732  157655 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:10:12.765751  157655 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:10:12.766112  157655 cli_runner.go:164] Run: docker container inspect addons-954584 --format={{.State.Status}}
	I0108 21:10:12.766418  157655 cli_runner.go:164] Run: docker container inspect addons-954584 --format={{.State.Status}}
	I0108 21:10:12.766722  157655 addons.go:237] Setting addon default-storageclass=true in "addons-954584"
	I0108 21:10:12.767355  157655 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0108 21:10:12.767776  157655 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0108 21:10:12.767799  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:10:12.767808  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:10:12.770385  157655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-954584
	I0108 21:10:12.770617  157655 host.go:66] Checking if "addons-954584" exists ...
	I0108 21:10:12.771168  157655 cli_runner.go:164] Run: docker container inspect addons-954584 --format={{.State.Status}}
	I0108 21:10:12.778222  157655 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0108 21:10:12.778247  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0108 21:10:12.778310  157655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-954584
	I0108 21:10:12.780351  157655 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0108 21:10:12.781647  157655 addons.go:429] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0108 21:10:12.781666  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0108 21:10:12.781721  157655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-954584
	I0108 21:10:12.783369  157655 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 21:10:12.783388  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0108 21:10:12.783440  157655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-954584
	I0108 21:10:12.773203  157655 addons.go:237] Setting addon storage-provisioner-rancher=true in "addons-954584"
	I0108 21:10:12.783667  157655 host.go:66] Checking if "addons-954584" exists ...
	I0108 21:10:12.784244  157655 cli_runner.go:164] Run: docker container inspect addons-954584 --format={{.State.Status}}
	I0108 21:10:12.779510  157655 addons.go:429] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0108 21:10:12.784556  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0108 21:10:12.784614  157655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-954584
	I0108 21:10:12.771433  157655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-954584
	I0108 21:10:12.790145  157655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/addons-954584/id_rsa Username:docker}
	I0108 21:10:12.793291  157655 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0108 21:10:12.794677  157655 addons.go:429] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0108 21:10:12.794697  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0108 21:10:12.794751  157655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-954584
	I0108 21:10:12.807012  157655 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0108 21:10:12.808339  157655 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0108 21:10:12.808360  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0108 21:10:12.808424  157655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-954584
	I0108 21:10:12.824249  157655 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0108 21:10:12.823688  157655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/addons-954584/id_rsa Username:docker}
	I0108 21:10:12.826446  157655 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0108 21:10:12.828169  157655 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0108 21:10:12.829438  157655 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0108 21:10:12.832138  157655 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0108 21:10:12.833409  157655 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0108 21:10:12.834963  157655 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 21:10:12.834944  157655 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0108 21:10:12.836580  157655 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 21:10:12.838914  157655 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0108 21:10:12.838251  157655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/addons-954584/id_rsa Username:docker}
	I0108 21:10:12.838863  157655 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0108 21:10:12.839269  157655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/addons-954584/id_rsa Username:docker}
	I0108 21:10:12.841214  157655 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0108 21:10:12.840045  157655 addons.go:429] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0108 21:10:12.843160  157655 out.go:177]   - Using image docker.io/busybox:stable
	I0108 21:10:12.844734  157655 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0108 21:10:12.844751  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0108 21:10:12.844801  157655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-954584
	I0108 21:10:12.843476  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0108 21:10:12.845360  157655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-954584
	I0108 21:10:12.843657  157655 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 21:10:12.845594  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0108 21:10:12.845650  157655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-954584
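The recurring "scp memory --> <path> (<n> bytes)" lines show how manifests reach the node: minikube streams the embedded YAML over the SSH session and writes it under /etc/kubernetes/addons inside the node container, then applies it with the kubectl it staged at /var/lib/minikube/binaries/v1.28.4/kubectl. A rough hand-rolled equivalent, using the port and key visible later in this log, would be:

	cat ingress-deploy.yaml | ssh -p 32772 \
	  -i /home/jenkins/minikube-integration/17866-150013/.minikube/machines/addons-954584/id_rsa \
	  docker@127.0.0.1 'sudo tee /etc/kubernetes/addons/ingress-deploy.yaml >/dev/null'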
	I0108 21:10:12.845847  157655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/addons-954584/id_rsa Username:docker}
	I0108 21:10:12.848915  157655 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0108 21:10:12.847172  157655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/addons-954584/id_rsa Username:docker}
	I0108 21:10:12.850809  157655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/addons-954584/id_rsa Username:docker}
	I0108 21:10:12.851311  157655 addons.go:429] installing /etc/kubernetes/addons/deployment.yaml
	I0108 21:10:12.851341  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0108 21:10:12.851390  157655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-954584
	I0108 21:10:12.853509  157655 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:10:12.853523  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:10:12.853564  157655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-954584
	I0108 21:10:12.863233  157655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/addons-954584/id_rsa Username:docker}
	I0108 21:10:12.875826  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:10:12.886186  157655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/addons-954584/id_rsa Username:docker}
	I0108 21:10:12.887698  157655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/addons-954584/id_rsa Username:docker}
	I0108 21:10:12.888418  157655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/addons-954584/id_rsa Username:docker}
	I0108 21:10:12.888982  157655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/addons-954584/id_rsa Username:docker}
	I0108 21:10:12.891648  157655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/addons-954584/id_rsa Username:docker}
	I0108 21:10:12.892428  157655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/addons-954584/id_rsa Username:docker}
	W0108 21:10:12.916077  157655 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0108 21:10:12.916130  157655 retry.go:31] will retry after 331.524878ms: ssh: handshake failed: EOF
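Each docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" call above resolves which host port Docker published for the node's sshd; here it is 32772, which every subsequent "new ssh client" line dials at 127.0.0.1. The "ssh: handshake failed: EOF" warning is transient — sshd in the just-created container was not yet accepting connections — and the retry 331ms later succeeds. The same lookup and login can be done by hand:

	docker port addons-954584 22
	# e.g. 0.0.0.0:32772
	ssh -o StrictHostKeyChecking=no -p 32772 \
	  -i /home/jenkins/minikube-integration/17866-150013/.minikube/machines/addons-954584/id_rsa \
	  docker@127.0.0.1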
	I0108 21:10:13.123941  157655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0108 21:10:13.127383  157655 addons.go:429] installing /etc/kubernetes/addons/registry-svc.yaml
	I0108 21:10:13.127459  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0108 21:10:13.131684  157655 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:10:13.131709  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0108 21:10:13.234399  157655 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-954584" context rescaled to 1 replicas
	I0108 21:10:13.234452  157655 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 21:10:13.236312  157655 out.go:177] * Verifying Kubernetes components...
	I0108 21:10:13.237843  157655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:10:13.316479  157655 addons.go:429] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0108 21:10:13.316564  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0108 21:10:13.320470  157655 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:10:13.320546  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:10:13.321695  157655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 21:10:13.324293  157655 addons.go:429] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0108 21:10:13.324353  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0108 21:10:13.327551  157655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:10:13.422337  157655 addons.go:429] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0108 21:10:13.422439  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0108 21:10:13.424010  157655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0108 21:10:13.427383  157655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:10:13.524639  157655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0108 21:10:13.615868  157655 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0108 21:10:13.615957  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0108 21:10:13.616339  157655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0108 21:10:13.616840  157655 addons.go:429] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0108 21:10:13.616903  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0108 21:10:13.621860  157655 addons.go:429] installing /etc/kubernetes/addons/ig-role.yaml
	I0108 21:10:13.621882  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0108 21:10:13.733292  157655 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0108 21:10:13.733387  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0108 21:10:13.815599  157655 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:10:13.815688  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:10:13.818396  157655 addons.go:429] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0108 21:10:13.818465  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0108 21:10:14.030419  157655 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0108 21:10:14.030448  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0108 21:10:14.034477  157655 addons.go:429] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0108 21:10:14.034501  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0108 21:10:14.118713  157655 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0108 21:10:14.118839  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0108 21:10:14.122977  157655 addons.go:429] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0108 21:10:14.123045  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0108 21:10:14.216516  157655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 21:10:14.218699  157655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:10:14.314257  157655 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0108 21:10:14.314362  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0108 21:10:14.325235  157655 addons.go:429] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0108 21:10:14.325267  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0108 21:10:14.415067  157655 addons.go:429] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0108 21:10:14.415151  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0108 21:10:14.514979  157655 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0108 21:10:14.515012  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0108 21:10:14.527492  157655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0108 21:10:14.534194  157655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0108 21:10:14.719662  157655 addons.go:429] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0108 21:10:14.719741  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0108 21:10:14.819294  157655 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0108 21:10:14.819378  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0108 21:10:14.823050  157655 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.947184449s)
	I0108 21:10:14.823133  157655 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
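The 1.9s pipeline completed above rewrites CoreDNS's Corefile in place: it inserts a hosts stanza mapping host.minikube.internal to the container gateway 192.168.49.1 ahead of the "forward . /etc/resolv.conf" directive (with fallthrough so all other names still resolve), adds log before errors, and replaces the ConfigMap. The injected block can be verified with:

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# should now contain:
	#        hosts {
	#           192.168.49.1 host.minikube.internal
	#           fallthrough
	#        }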
	I0108 21:10:14.827328  157655 addons.go:429] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0108 21:10:14.827392  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0108 21:10:15.015181  157655 addons.go:429] installing /etc/kubernetes/addons/ig-crd.yaml
	I0108 21:10:15.015282  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0108 21:10:15.031820  157655 addons.go:429] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 21:10:15.031903  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0108 21:10:15.032664  157655 addons.go:429] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0108 21:10:15.032724  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0108 21:10:15.227428  157655 addons.go:429] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0108 21:10:15.227512  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0108 21:10:15.236521  157655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.112525365s)
	I0108 21:10:15.236554  157655 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.998667342s)
	I0108 21:10:15.237695  157655 node_ready.go:35] waiting up to 6m0s for node "addons-954584" to be "Ready" ...
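node_ready.go simply polls the node object until its Ready condition flips to True; the node "addons-954584" has status "Ready":"False" lines recurring below are those polls. A one-off equivalent check:

	kubectl --context addons-954584 get node addons-954584 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'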
	I0108 21:10:15.315372  157655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 21:10:15.316207  157655 addons.go:429] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0108 21:10:15.316268  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0108 21:10:15.621472  157655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0108 21:10:15.822719  157655 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0108 21:10:15.822809  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0108 21:10:16.315383  157655 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0108 21:10:16.315468  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0108 21:10:16.534455  157655 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0108 21:10:16.534541  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0108 21:10:16.928180  157655 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0108 21:10:16.928213  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0108 21:10:17.228410  157655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.906650381s)
	I0108 21:10:17.327088  157655 node_ready.go:58] node "addons-954584" has status "Ready":"False"
	I0108 21:10:17.330208  157655 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0108 21:10:17.330278  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0108 21:10:17.633491  157655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0108 21:10:17.921711  157655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.594069492s)
	I0108 21:10:17.921809  157655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.494395104s)
	I0108 21:10:17.921839  157655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.497695881s)
	I0108 21:10:18.640211  157655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.11545192s)
	I0108 21:10:18.640396  157655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.023960549s)
	I0108 21:10:18.640710  157655 addons.go:473] Verifying addon registry=true in "addons-954584"
	I0108 21:10:18.642434  157655 out.go:177] * Verifying registry addon...
	I0108 21:10:18.644421  157655 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0108 21:10:18.723413  157655 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0108 21:10:18.723508  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
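kapi.go:96 polls the pods behind each addon's label selector until they leave Pending, which produces the long interleaved runs of "waiting for pod" lines for the registry, ingress-nginx, csi-hostpath-driver and gcp-auth selectors below. A blocking equivalent for the registry case:

	kubectl -n kube-system wait pod -l kubernetes.io/minikube-addons=registry \
	  --for=condition=Ready --timeout=6m0s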
	I0108 21:10:19.148156  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:19.535803  157655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.319182633s)
	I0108 21:10:19.535848  157655 addons.go:473] Verifying addon ingress=true in "addons-954584"
	I0108 21:10:19.535895  157655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.317116066s)
	I0108 21:10:19.537403  157655 out.go:177] * Verifying ingress addon...
	I0108 21:10:19.535928  157655 addons.go:473] Verifying addon metrics-server=true in "addons-954584"
	I0108 21:10:19.535935  157655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.008404296s)
	I0108 21:10:19.535983  157655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.001704622s)
	I0108 21:10:19.536094  157655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.220623143s)
	I0108 21:10:19.536166  157655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.914586338s)
	I0108 21:10:19.540010  157655 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-954584 service yakd-dashboard -n yakd-dashboard
	
	
	W0108 21:10:19.538647  157655 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0108 21:10:19.539286  157655 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0108 21:10:19.541269  157655 retry.go:31] will retry after 186.214704ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
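This failure is an ordering race rather than a bad manifest: the VolumeSnapshot CRDs are created in the same kubectl apply as the csi-hostpath-snapclass object, and a freshly created CRD is not served immediately, hence no matches for kind "VolumeSnapshotClass". By the time of the retry at 21:10:19.727 (run with apply --force, below) the CRDs were established and the re-apply completed. Applying in two waves sidesteps the race entirely:

	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=Established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f csi-hostpath-snapshotclass.yaml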
	I0108 21:10:19.544158  157655 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0108 21:10:19.544171  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:19.574143  157655 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0108 21:10:19.574199  157655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-954584
	I0108 21:10:19.591269  157655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/addons-954584/id_rsa Username:docker}
	I0108 21:10:19.648257  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:19.723643  157655 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0108 21:10:19.727815  157655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 21:10:19.741240  157655 node_ready.go:58] node "addons-954584" has status "Ready":"False"
	I0108 21:10:19.743288  157655 addons.go:237] Setting addon gcp-auth=true in "addons-954584"
	I0108 21:10:19.743356  157655 host.go:66] Checking if "addons-954584" exists ...
	I0108 21:10:19.743867  157655 cli_runner.go:164] Run: docker container inspect addons-954584 --format={{.State.Status}}
	I0108 21:10:19.764522  157655 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0108 21:10:19.764590  157655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-954584
	I0108 21:10:19.780659  157655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/addons-954584/id_rsa Username:docker}
	I0108 21:10:20.045794  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:20.148786  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:20.464579  157655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.830981592s)
	I0108 21:10:20.464613  157655 addons.go:473] Verifying addon csi-hostpath-driver=true in "addons-954584"
	I0108 21:10:20.466357  157655 out.go:177] * Verifying csi-hostpath-driver addon...
	I0108 21:10:20.468182  157655 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0108 21:10:20.471170  157655 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0108 21:10:20.471187  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:20.545401  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:20.649061  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:20.834233  157655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.106373188s)
	I0108 21:10:20.834312  157655 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.069752236s)
	I0108 21:10:20.835937  157655 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 21:10:20.837346  157655 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0108 21:10:20.838512  157655 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0108 21:10:20.838533  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0108 21:10:20.855766  157655 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0108 21:10:20.855789  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0108 21:10:20.872301  157655 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0108 21:10:20.872320  157655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0108 21:10:20.889711  157655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0108 21:10:20.972499  157655 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0108 21:10:20.972525  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:21.046171  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:21.149776  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:21.518118  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:21.545927  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:21.649212  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:21.742520  157655 node_ready.go:58] node "addons-954584" has status "Ready":"False"
	I0108 21:10:22.020142  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:22.122222  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:22.125391  157655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.235638185s)
	I0108 21:10:22.126660  157655 addons.go:473] Verifying addon gcp-auth=true in "addons-954584"
	I0108 21:10:22.129332  157655 out.go:177] * Verifying gcp-auth addon...
	I0108 21:10:22.131637  157655 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0108 21:10:22.139702  157655 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0108 21:10:22.139725  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:22.217195  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:22.518089  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:22.617865  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:22.635791  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:22.718500  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:23.018703  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:23.046185  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:23.136142  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:23.149275  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:23.516977  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:23.545841  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:23.635020  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:23.648510  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:23.973116  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:24.046003  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:24.135876  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:24.148518  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:24.241279  157655 node_ready.go:58] node "addons-954584" has status "Ready":"False"
	I0108 21:10:24.472316  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:24.545816  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:24.635824  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:24.648219  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:24.973517  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:25.046060  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:25.135660  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:25.148779  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:25.472348  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:25.545753  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:25.634911  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:25.647894  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:25.971593  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:26.045693  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:26.135524  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:26.148807  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:26.241572  157655 node_ready.go:58] node "addons-954584" has status "Ready":"False"
	I0108 21:10:26.472043  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:26.545215  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:26.635328  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:26.648365  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:26.973171  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:27.045273  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:27.135356  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:27.149404  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:27.472968  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:27.545788  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:27.634593  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:27.650260  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:27.972691  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:28.044723  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:28.135155  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:28.148345  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:28.473050  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:28.545499  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:28.634810  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:28.648029  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:28.740423  157655 node_ready.go:58] node "addons-954584" has status "Ready":"False"
	I0108 21:10:28.972603  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:29.045723  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:29.135900  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:29.161828  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:29.472114  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:29.545374  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:29.634859  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:29.647728  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:29.972057  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:30.045306  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:30.134697  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:30.148544  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:30.472410  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:30.545634  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:30.634996  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:30.648001  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:30.740720  157655 node_ready.go:58] node "addons-954584" has status "Ready":"False"
	I0108 21:10:30.972700  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:31.045618  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:31.135119  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:31.147869  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:31.472290  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:31.545472  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:31.634583  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:31.648664  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:31.973225  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:32.045581  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:32.134995  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:32.148050  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:32.472610  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:32.544755  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:32.635195  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:32.648424  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:32.741033  157655 node_ready.go:58] node "addons-954584" has status "Ready":"False"
	I0108 21:10:32.973233  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:33.045607  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:33.134650  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:33.148546  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:33.473179  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:33.545473  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:33.634855  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:33.647941  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:33.972404  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:34.045391  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:34.134887  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:34.148124  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:34.472922  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:34.545214  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:34.635711  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:34.648689  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:34.741334  157655 node_ready.go:58] node "addons-954584" has status "Ready":"False"
	I0108 21:10:34.974016  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:35.045024  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:35.135680  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:35.148825  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:35.473184  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:35.545247  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:35.635703  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:35.647503  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:35.975140  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:36.045262  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:36.135251  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:36.148427  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:36.472954  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:36.545223  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:36.635658  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:36.648700  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:36.973553  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:37.045405  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:37.134952  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:37.148074  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:37.241155  157655 node_ready.go:58] node "addons-954584" has status "Ready":"False"
	I0108 21:10:37.472997  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:37.545071  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:37.635088  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:37.647584  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:37.972967  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:38.044932  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:38.135461  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:38.148378  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:38.472897  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:38.545190  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:38.635808  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:38.648899  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:38.972463  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:39.045727  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:39.135453  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:39.148415  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:39.472864  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:39.545131  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:39.635753  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:39.648725  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:39.741288  157655 node_ready.go:58] node "addons-954584" has status "Ready":"False"
	I0108 21:10:39.974947  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:40.045275  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:40.135650  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:40.148665  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:40.472968  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:40.545088  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:40.635917  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:40.647798  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:40.973158  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:41.045082  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:41.135689  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:41.149081  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:41.472435  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:41.545419  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:41.634791  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:41.647856  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:41.741745  157655 node_ready.go:58] node "addons-954584" has status "Ready":"False"
	I0108 21:10:41.972478  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:42.045389  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:42.134889  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:42.147997  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:42.472885  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:42.544944  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:42.635483  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:42.648582  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:42.972036  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:43.045048  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:43.135724  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:43.148673  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:43.472151  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:43.545213  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:43.635429  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:43.648489  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:43.975189  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:44.045213  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:44.135495  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:44.148794  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:44.241548  157655 node_ready.go:58] node "addons-954584" has status "Ready":"False"
	I0108 21:10:44.472164  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:44.545356  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:44.634772  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:44.648433  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:44.973035  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:45.045187  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:45.135820  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:45.148091  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:45.472486  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:45.545642  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:45.635210  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:45.648588  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:46.019679  157655 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0108 21:10:46.019719  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:46.044728  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:46.217825  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:46.226105  157655 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0108 21:10:46.226131  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:46.316463  157655 node_ready.go:49] node "addons-954584" has status "Ready":"True"
	I0108 21:10:46.316491  157655 node_ready.go:38] duration metric: took 31.078774161s waiting for node "addons-954584" to be "Ready" ...
	I0108 21:10:46.316500  157655 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:10:46.324624  157655 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-b4v8l" in "kube-system" namespace to be "Ready" ...
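
The kapi.go:96 and pod_ready.go messages in this log come from minikube's polling loops: each tick lists the pods matching a label selector (or fetches one named pod), checks the PodReady condition, and repeats until it reports True or the deadline expires. Below is a minimal client-go sketch of the same label-selector wait, for readers reproducing it outside the test harness; the kubeconfig path, the kube-system namespace, and the 500ms poll interval are illustrative assumptions, not taken from minikube's kapi.go.

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		// Kubeconfig path is a placeholder assumption for this sketch.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
	
		// Same label selector the log above is waiting on; namespace assumed.
		sel := "kubernetes.io/minikube-addons=registry"
		deadline := time.Now().Add(6 * time.Minute)
	
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: sel})
			if err != nil {
				panic(err)
			}
			allReady := len(pods.Items) > 0
			for _, p := range pods.Items {
				if !isPodReady(&p) {
					allReady = false
					break
				}
			}
			if allReady {
				fmt.Println("all pods ready for", sel)
				return
			}
			// One line per tick, mirroring the repeated kapi.go:96 messages.
			fmt.Println("waiting for pod", sel)
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for", sel)
	}

Run against a cluster like addons-954584, a loop of this shape prints one "waiting for pod" line per tick, exactly the pattern visible above, until every matching pod passes the readiness check or the deadline is hit.
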
	I0108 21:10:46.473796  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:46.545859  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:46.636081  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:46.649575  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:46.974452  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:47.045646  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:47.136974  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:47.148004  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:47.474303  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:47.545257  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:47.635644  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:47.648456  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:47.830815  157655 pod_ready.go:92] pod "coredns-5dd5756b68-b4v8l" in "kube-system" namespace has status "Ready":"True"
	I0108 21:10:47.830840  157655 pod_ready.go:81] duration metric: took 1.506125297s waiting for pod "coredns-5dd5756b68-b4v8l" in "kube-system" namespace to be "Ready" ...
	I0108 21:10:47.830867  157655 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-954584" in "kube-system" namespace to be "Ready" ...
	I0108 21:10:47.835254  157655 pod_ready.go:92] pod "etcd-addons-954584" in "kube-system" namespace has status "Ready":"True"
	I0108 21:10:47.835276  157655 pod_ready.go:81] duration metric: took 4.40163ms waiting for pod "etcd-addons-954584" in "kube-system" namespace to be "Ready" ...
	I0108 21:10:47.835289  157655 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-954584" in "kube-system" namespace to be "Ready" ...
	I0108 21:10:47.839614  157655 pod_ready.go:92] pod "kube-apiserver-addons-954584" in "kube-system" namespace has status "Ready":"True"
	I0108 21:10:47.839634  157655 pod_ready.go:81] duration metric: took 4.339129ms waiting for pod "kube-apiserver-addons-954584" in "kube-system" namespace to be "Ready" ...
	I0108 21:10:47.839643  157655 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-954584" in "kube-system" namespace to be "Ready" ...
	I0108 21:10:47.843730  157655 pod_ready.go:92] pod "kube-controller-manager-addons-954584" in "kube-system" namespace has status "Ready":"True"
	I0108 21:10:47.843749  157655 pod_ready.go:81] duration metric: took 4.099773ms waiting for pod "kube-controller-manager-addons-954584" in "kube-system" namespace to be "Ready" ...
	I0108 21:10:47.843759  157655 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8dlx5" in "kube-system" namespace to be "Ready" ...
	I0108 21:10:47.847904  157655 pod_ready.go:92] pod "kube-proxy-8dlx5" in "kube-system" namespace has status "Ready":"True"
	I0108 21:10:47.847927  157655 pod_ready.go:81] duration metric: took 4.160794ms waiting for pod "kube-proxy-8dlx5" in "kube-system" namespace to be "Ready" ...
	I0108 21:10:47.847939  157655 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-954584" in "kube-system" namespace to be "Ready" ...
	I0108 21:10:47.973964  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:48.044818  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:48.135341  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:48.149036  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:48.241644  157655 pod_ready.go:92] pod "kube-scheduler-addons-954584" in "kube-system" namespace has status "Ready":"True"
	I0108 21:10:48.241671  157655 pod_ready.go:81] duration metric: took 393.72317ms waiting for pod "kube-scheduler-addons-954584" in "kube-system" namespace to be "Ready" ...
	I0108 21:10:48.241692  157655 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-5zm94" in "kube-system" namespace to be "Ready" ...
	I0108 21:10:48.474082  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:48.545902  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:48.635552  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:48.648233  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:48.974513  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:49.119134  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:49.137483  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:49.219359  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:49.519856  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:49.545815  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:49.638783  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:49.721198  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:50.019203  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:50.045807  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:50.136100  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:50.150602  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:50.247983  157655 pod_ready.go:102] pod "metrics-server-7c66d45ddc-5zm94" in "kube-system" namespace has status "Ready":"False"
	I0108 21:10:50.474187  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:50.546030  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:50.635783  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:50.649961  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:50.974400  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:51.046547  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:51.135007  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:51.149191  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:51.474087  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:51.546529  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:51.635739  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:51.649611  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:51.975069  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:52.046780  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:52.135289  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:52.149886  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:52.248213  157655 pod_ready.go:102] pod "metrics-server-7c66d45ddc-5zm94" in "kube-system" namespace has status "Ready":"False"
	I0108 21:10:52.474730  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:52.546530  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:52.635811  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:52.649708  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:52.973737  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:53.045598  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:53.135277  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:53.149245  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:53.519396  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:53.545705  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:53.635212  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:53.649088  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:54.019900  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:54.046306  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:54.136141  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:54.149702  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:54.518157  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:54.546849  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:54.635770  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:54.649864  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:54.748352  157655 pod_ready.go:102] pod "metrics-server-7c66d45ddc-5zm94" in "kube-system" namespace has status "Ready":"False"
	I0108 21:10:54.976479  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:55.045599  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:55.135127  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:55.148969  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:55.474115  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:55.545492  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:55.635055  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:55.648862  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:55.973832  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:56.046468  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:56.135533  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:56.149095  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:56.473384  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:56.545340  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:56.634968  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:56.648890  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:56.975975  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:57.046361  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:57.136127  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:57.149943  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:57.321754  157655 pod_ready.go:102] pod "metrics-server-7c66d45ddc-5zm94" in "kube-system" namespace has status "Ready":"False"
	I0108 21:10:57.519427  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:57.545669  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:57.634882  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:57.650039  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:57.974539  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:58.044880  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:58.135576  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:58.149181  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:58.473813  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:58.545384  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:58.634575  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:58.653105  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:58.976997  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:59.045755  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:59.135661  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:59.149372  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:59.474140  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:10:59.545966  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:10:59.636343  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:10:59.649047  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:10:59.749340  157655 pod_ready.go:102] pod "metrics-server-7c66d45ddc-5zm94" in "kube-system" namespace has status "Ready":"False"
	I0108 21:10:59.978055  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:00.045365  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:00.134631  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:00.149158  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:00.474169  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:00.546415  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:00.635684  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:00.650297  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:00.974629  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:01.045737  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:01.134893  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:01.149278  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:01.474305  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:01.545133  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:01.635591  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:01.649703  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:01.747426  157655 pod_ready.go:92] pod "metrics-server-7c66d45ddc-5zm94" in "kube-system" namespace has status "Ready":"True"
	I0108 21:11:01.747449  157655 pod_ready.go:81] duration metric: took 13.505749708s waiting for pod "metrics-server-7c66d45ddc-5zm94" in "kube-system" namespace to be "Ready" ...
	I0108 21:11:01.747460  157655 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-m7f7g" in "kube-system" namespace to be "Ready" ...
	I0108 21:11:02.019016  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:02.047311  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:02.135683  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:02.150009  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:02.474145  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:02.545954  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:02.635603  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:02.650091  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:02.974926  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:03.045921  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:03.135428  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:03.148920  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:03.473949  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:03.545901  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:03.635492  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:03.649944  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:03.753879  157655 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m7f7g" in "kube-system" namespace has status "Ready":"False"
	I0108 21:11:03.974851  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:04.047045  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:04.135340  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:04.149203  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:04.474246  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:04.544860  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:04.635437  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:04.649105  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:04.974017  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:05.045769  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:05.135285  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:05.148966  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:05.473915  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:05.546057  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:05.638086  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:05.648526  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:05.975793  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:06.045856  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:06.135261  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:06.148957  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:06.253770  157655 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m7f7g" in "kube-system" namespace has status "Ready":"False"
	I0108 21:11:06.473549  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:06.545751  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:06.635015  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:06.649655  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:06.974656  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:07.045286  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:07.136766  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:07.149612  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:07.473216  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:07.544901  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:07.634898  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:07.648939  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:07.973933  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:08.045693  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:08.136001  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:08.149340  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:08.474526  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:08.545659  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:08.635503  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:08.650022  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:08.754602  157655 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m7f7g" in "kube-system" namespace has status "Ready":"False"
	I0108 21:11:08.975318  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:09.045380  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:09.135242  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:09.148670  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:09.473319  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:09.545413  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:09.635645  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:09.649263  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:09.975089  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:10.045885  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:10.135161  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:10.148563  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:10.473800  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:10.545582  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:10.635986  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:10.649575  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:11.018674  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:11.044989  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:11.136228  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:11.151099  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:11.258394  157655 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m7f7g" in "kube-system" namespace has status "Ready":"False"
	I0108 21:11:11.473826  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:11.546284  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:11.635975  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:11.648619  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:11.974402  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:12.045928  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:12.135343  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:12.149316  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:12.473950  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:12.546430  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:12.635572  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:12.649753  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:12.975265  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:13.045987  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:13.135423  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:13.150576  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:13.473528  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:13.548042  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:13.635822  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:13.650100  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:13.753304  157655 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m7f7g" in "kube-system" namespace has status "Ready":"False"
	I0108 21:11:14.015349  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:14.046066  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:14.135643  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:14.150526  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:14.474659  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:14.545425  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:14.635853  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:14.648362  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:14.975368  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:15.046545  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:15.136170  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:15.150209  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:15.474459  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:15.547030  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:15.635628  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:15.649318  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:15.753650  157655 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m7f7g" in "kube-system" namespace has status "Ready":"False"
	I0108 21:11:15.974402  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:16.045660  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:16.135969  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:16.148840  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:16.473574  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:16.545169  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:16.635637  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:16.649282  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:11:16.976034  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:17.046096  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:17.135310  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:17.149034  157655 kapi.go:107] duration metric: took 58.504610452s to wait for kubernetes.io/minikube-addons=registry ...
	I0108 21:11:17.472809  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:17.545767  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:17.634646  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:17.976695  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:18.045522  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:18.135062  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:18.254174  157655 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m7f7g" in "kube-system" namespace has status "Ready":"False"
	I0108 21:11:18.518386  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:18.618361  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:18.636586  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:19.026857  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:19.119001  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:19.136618  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:19.518576  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:19.546406  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:19.635008  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:19.974247  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:20.046505  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:20.136423  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:20.254707  157655 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m7f7g" in "kube-system" namespace has status "Ready":"False"
	I0108 21:11:20.519703  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:20.545851  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:20.636841  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:21.018311  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:21.046081  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:21.135530  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:21.474147  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:21.546574  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:21.635480  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:21.976222  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:22.045437  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:22.135540  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:22.474932  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:22.546010  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:22.635266  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:22.754030  157655 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m7f7g" in "kube-system" namespace has status "Ready":"False"
	I0108 21:11:22.976071  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:23.046293  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:23.135363  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:23.474632  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:23.546167  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:23.636269  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:24.040340  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:24.119349  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:24.136100  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:24.320995  157655 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-m7f7g" in "kube-system" namespace has status "Ready":"True"
	I0108 21:11:24.321026  157655 pod_ready.go:81] duration metric: took 22.573558222s waiting for pod "nvidia-device-plugin-daemonset-m7f7g" in "kube-system" namespace to be "Ready" ...
	I0108 21:11:24.321053  157655 pod_ready.go:38] duration metric: took 38.004537321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:11:24.321080  157655 api_server.go:52] waiting for apiserver process to appear ...
	I0108 21:11:24.321129  157655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:11:24.321193  157655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:11:24.441627  157655 cri.go:89] found id: "70e4735350c76cff6df5f1b1e60d9b9d7c344bfeef1a1499b802f7d3964bac1c"
	I0108 21:11:24.441715  157655 cri.go:89] found id: ""
	I0108 21:11:24.441737  157655 logs.go:284] 1 containers: [70e4735350c76cff6df5f1b1e60d9b9d7c344bfeef1a1499b802f7d3964bac1c]
	I0108 21:11:24.441815  157655 ssh_runner.go:195] Run: which crictl
	I0108 21:11:24.515064  157655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 21:11:24.515190  157655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:11:24.521582  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:24.546390  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:24.636800  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:24.732146  157655 cri.go:89] found id: "6d3f9638bc6d45aaba773dc969fde485846a118fe66e1a791b37f1a1f906c576"
	I0108 21:11:24.732226  157655 cri.go:89] found id: ""
	I0108 21:11:24.732248  157655 logs.go:284] 1 containers: [6d3f9638bc6d45aaba773dc969fde485846a118fe66e1a791b37f1a1f906c576]
	I0108 21:11:24.732331  157655 ssh_runner.go:195] Run: which crictl
	I0108 21:11:24.737381  157655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 21:11:24.737522  157655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:11:25.021996  157655 cri.go:89] found id: "ae69dc4b7cc0866b8767bea1efd694a24e4f1564d622ecdd3a55d8c362becdc1"
	I0108 21:11:25.023462  157655 cri.go:89] found id: ""
	I0108 21:11:25.023521  157655 logs.go:284] 1 containers: [ae69dc4b7cc0866b8767bea1efd694a24e4f1564d622ecdd3a55d8c362becdc1]
	I0108 21:11:25.023603  157655 ssh_runner.go:195] Run: which crictl
	I0108 21:11:25.023424  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:25.028460  157655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:11:25.028557  157655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:11:25.122726  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:25.135554  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:25.232334  157655 cri.go:89] found id: "0f99f418a818d2f5a2d41f1b1d3d38e46d5c9b1a3c412b362ee4492b34f4551d"
	I0108 21:11:25.232425  157655 cri.go:89] found id: ""
	I0108 21:11:25.232443  157655 logs.go:284] 1 containers: [0f99f418a818d2f5a2d41f1b1d3d38e46d5c9b1a3c412b362ee4492b34f4551d]
	I0108 21:11:25.232503  157655 ssh_runner.go:195] Run: which crictl
	I0108 21:11:25.314079  157655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:11:25.314223  157655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:11:25.444284  157655 cri.go:89] found id: "9e1482203c36fa45ef77e65d0b72b0e3747030a04bd4d5c43b167fe17488f959"
	I0108 21:11:25.444362  157655 cri.go:89] found id: ""
	I0108 21:11:25.444388  157655 logs.go:284] 1 containers: [9e1482203c36fa45ef77e65d0b72b0e3747030a04bd4d5c43b167fe17488f959]
	I0108 21:11:25.444462  157655 ssh_runner.go:195] Run: which crictl
	I0108 21:11:25.518000  157655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:11:25.518082  157655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:11:25.519774  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:25.546089  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:25.614709  157655 cri.go:89] found id: "afa97e177c613aef2f86294a95172d48364c4e4272bffca6ac5338678a39eaf8"
	I0108 21:11:25.614735  157655 cri.go:89] found id: ""
	I0108 21:11:25.614745  157655 logs.go:284] 1 containers: [afa97e177c613aef2f86294a95172d48364c4e4272bffca6ac5338678a39eaf8]
	I0108 21:11:25.614808  157655 ssh_runner.go:195] Run: which crictl
	I0108 21:11:25.618745  157655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 21:11:25.618821  157655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 21:11:25.638999  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:25.715982  157655 cri.go:89] found id: "372d2f0fe0aee88d74e2e2d05911d44f31f969d49aa3d7b59154af14b0b2a709"
	I0108 21:11:25.716013  157655 cri.go:89] found id: ""
	I0108 21:11:25.716024  157655 logs.go:284] 1 containers: [372d2f0fe0aee88d74e2e2d05911d44f31f969d49aa3d7b59154af14b0b2a709]
	I0108 21:11:25.716088  157655 ssh_runner.go:195] Run: which crictl
	I0108 21:11:25.720089  157655 logs.go:123] Gathering logs for kubelet ...
	I0108 21:11:25.720117  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 21:11:25.806855  157655 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:11:25.806888  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 21:11:26.019274  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:26.046527  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:26.125898  157655 logs.go:123] Gathering logs for kube-apiserver [70e4735350c76cff6df5f1b1e60d9b9d7c344bfeef1a1499b802f7d3964bac1c] ...
	I0108 21:11:26.125929  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70e4735350c76cff6df5f1b1e60d9b9d7c344bfeef1a1499b802f7d3964bac1c"
	I0108 21:11:26.136061  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:26.241968  157655 logs.go:123] Gathering logs for etcd [6d3f9638bc6d45aaba773dc969fde485846a118fe66e1a791b37f1a1f906c576] ...
	I0108 21:11:26.242003  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d3f9638bc6d45aaba773dc969fde485846a118fe66e1a791b37f1a1f906c576"
	I0108 21:11:26.338050  157655 logs.go:123] Gathering logs for kube-proxy [9e1482203c36fa45ef77e65d0b72b0e3747030a04bd4d5c43b167fe17488f959] ...
	I0108 21:11:26.338085  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e1482203c36fa45ef77e65d0b72b0e3747030a04bd4d5c43b167fe17488f959"
	I0108 21:11:26.374262  157655 logs.go:123] Gathering logs for container status ...
	I0108 21:11:26.374293  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:11:26.445073  157655 logs.go:123] Gathering logs for dmesg ...
	I0108 21:11:26.445109  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:11:26.460778  157655 logs.go:123] Gathering logs for coredns [ae69dc4b7cc0866b8767bea1efd694a24e4f1564d622ecdd3a55d8c362becdc1] ...
	I0108 21:11:26.460830  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae69dc4b7cc0866b8767bea1efd694a24e4f1564d622ecdd3a55d8c362becdc1"
	I0108 21:11:26.474428  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:26.544857  157655 logs.go:123] Gathering logs for kube-scheduler [0f99f418a818d2f5a2d41f1b1d3d38e46d5c9b1a3c412b362ee4492b34f4551d] ...
	I0108 21:11:26.544894  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f99f418a818d2f5a2d41f1b1d3d38e46d5c9b1a3c412b362ee4492b34f4551d"
	I0108 21:11:26.546121  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:26.636244  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:26.649881  157655 logs.go:123] Gathering logs for kube-controller-manager [afa97e177c613aef2f86294a95172d48364c4e4272bffca6ac5338678a39eaf8] ...
	I0108 21:11:26.649918  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afa97e177c613aef2f86294a95172d48364c4e4272bffca6ac5338678a39eaf8"
	I0108 21:11:26.762392  157655 logs.go:123] Gathering logs for kindnet [372d2f0fe0aee88d74e2e2d05911d44f31f969d49aa3d7b59154af14b0b2a709] ...
	I0108 21:11:26.762435  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 372d2f0fe0aee88d74e2e2d05911d44f31f969d49aa3d7b59154af14b0b2a709"
	I0108 21:11:26.853810  157655 logs.go:123] Gathering logs for CRI-O ...
	I0108 21:11:26.853853  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 21:11:26.974092  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:27.046190  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:27.135871  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:27.473785  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:27.545965  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:27.634650  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:27.974669  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:28.045564  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:28.135341  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:28.474382  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:28.545910  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:28.635243  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:28.975251  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:29.046419  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:29.135944  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:29.432061  157655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:29.446007  157655 api_server.go:72] duration metric: took 1m16.211483467s to wait for apiserver process to appear ...
	I0108 21:11:29.446033  157655 api_server.go:88] waiting for apiserver healthz status ...
	I0108 21:11:29.446077  157655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:11:29.446142  157655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:11:29.474982  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:29.488057  157655 cri.go:89] found id: "70e4735350c76cff6df5f1b1e60d9b9d7c344bfeef1a1499b802f7d3964bac1c"
	I0108 21:11:29.488085  157655 cri.go:89] found id: ""
	I0108 21:11:29.488093  157655 logs.go:284] 1 containers: [70e4735350c76cff6df5f1b1e60d9b9d7c344bfeef1a1499b802f7d3964bac1c]
	I0108 21:11:29.488146  157655 ssh_runner.go:195] Run: which crictl
	I0108 21:11:29.492030  157655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 21:11:29.492095  157655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:11:29.539826  157655 cri.go:89] found id: "6d3f9638bc6d45aaba773dc969fde485846a118fe66e1a791b37f1a1f906c576"
	I0108 21:11:29.539912  157655 cri.go:89] found id: ""
	I0108 21:11:29.539930  157655 logs.go:284] 1 containers: [6d3f9638bc6d45aaba773dc969fde485846a118fe66e1a791b37f1a1f906c576]
	I0108 21:11:29.540000  157655 ssh_runner.go:195] Run: which crictl
	I0108 21:11:29.544487  157655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 21:11:29.544587  157655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:11:29.547064  157655 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:11:29.636064  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:29.649604  157655 cri.go:89] found id: "ae69dc4b7cc0866b8767bea1efd694a24e4f1564d622ecdd3a55d8c362becdc1"
	I0108 21:11:29.649630  157655 cri.go:89] found id: ""
	I0108 21:11:29.649654  157655 logs.go:284] 1 containers: [ae69dc4b7cc0866b8767bea1efd694a24e4f1564d622ecdd3a55d8c362becdc1]
	I0108 21:11:29.649713  157655 ssh_runner.go:195] Run: which crictl
	I0108 21:11:29.654258  157655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:11:29.654324  157655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:11:29.826419  157655 cri.go:89] found id: "0f99f418a818d2f5a2d41f1b1d3d38e46d5c9b1a3c412b362ee4492b34f4551d"
	I0108 21:11:29.826447  157655 cri.go:89] found id: ""
	I0108 21:11:29.826458  157655 logs.go:284] 1 containers: [0f99f418a818d2f5a2d41f1b1d3d38e46d5c9b1a3c412b362ee4492b34f4551d]
	I0108 21:11:29.826513  157655 ssh_runner.go:195] Run: which crictl
	I0108 21:11:29.830736  157655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:11:29.830802  157655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:11:29.938905  157655 cri.go:89] found id: "9e1482203c36fa45ef77e65d0b72b0e3747030a04bd4d5c43b167fe17488f959"
	I0108 21:11:29.938977  157655 cri.go:89] found id: ""
	I0108 21:11:29.938992  157655 logs.go:284] 1 containers: [9e1482203c36fa45ef77e65d0b72b0e3747030a04bd4d5c43b167fe17488f959]
	I0108 21:11:29.939054  157655 ssh_runner.go:195] Run: which crictl
	I0108 21:11:29.943356  157655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:11:29.943427  157655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:11:30.019211  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:30.034706  157655 cri.go:89] found id: "afa97e177c613aef2f86294a95172d48364c4e4272bffca6ac5338678a39eaf8"
	I0108 21:11:30.034731  157655 cri.go:89] found id: ""
	I0108 21:11:30.034738  157655 logs.go:284] 1 containers: [afa97e177c613aef2f86294a95172d48364c4e4272bffca6ac5338678a39eaf8]
	I0108 21:11:30.034781  157655 ssh_runner.go:195] Run: which crictl
	I0108 21:11:30.038502  157655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 21:11:30.038573  157655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 21:11:30.046061  157655 kapi.go:107] duration metric: took 1m10.506770485s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0108 21:11:30.129997  157655 cri.go:89] found id: "372d2f0fe0aee88d74e2e2d05911d44f31f969d49aa3d7b59154af14b0b2a709"
	I0108 21:11:30.130023  157655 cri.go:89] found id: ""
	I0108 21:11:30.130034  157655 logs.go:284] 1 containers: [372d2f0fe0aee88d74e2e2d05911d44f31f969d49aa3d7b59154af14b0b2a709]
	I0108 21:11:30.130094  157655 ssh_runner.go:195] Run: which crictl
	I0108 21:11:30.133561  157655 logs.go:123] Gathering logs for etcd [6d3f9638bc6d45aaba773dc969fde485846a118fe66e1a791b37f1a1f906c576] ...
	I0108 21:11:30.133589  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d3f9638bc6d45aaba773dc969fde485846a118fe66e1a791b37f1a1f906c576"
	I0108 21:11:30.135446  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:30.180012  157655 logs.go:123] Gathering logs for coredns [ae69dc4b7cc0866b8767bea1efd694a24e4f1564d622ecdd3a55d8c362becdc1] ...
	I0108 21:11:30.180045  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae69dc4b7cc0866b8767bea1efd694a24e4f1564d622ecdd3a55d8c362becdc1"
	I0108 21:11:30.249223  157655 logs.go:123] Gathering logs for kube-scheduler [0f99f418a818d2f5a2d41f1b1d3d38e46d5c9b1a3c412b362ee4492b34f4551d] ...
	I0108 21:11:30.249253  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f99f418a818d2f5a2d41f1b1d3d38e46d5c9b1a3c412b362ee4492b34f4551d"
	I0108 21:11:30.291210  157655 logs.go:123] Gathering logs for kube-controller-manager [afa97e177c613aef2f86294a95172d48364c4e4272bffca6ac5338678a39eaf8] ...
	I0108 21:11:30.291243  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afa97e177c613aef2f86294a95172d48364c4e4272bffca6ac5338678a39eaf8"
	I0108 21:11:30.366382  157655 logs.go:123] Gathering logs for kindnet [372d2f0fe0aee88d74e2e2d05911d44f31f969d49aa3d7b59154af14b0b2a709] ...
	I0108 21:11:30.366417  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 372d2f0fe0aee88d74e2e2d05911d44f31f969d49aa3d7b59154af14b0b2a709"
	I0108 21:11:30.399028  157655 logs.go:123] Gathering logs for CRI-O ...
	I0108 21:11:30.399055  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 21:11:30.473423  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:30.488292  157655 logs.go:123] Gathering logs for kubelet ...
	I0108 21:11:30.488323  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 21:11:30.571825  157655 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:11:30.571888  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 21:11:30.635713  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:30.667309  157655 logs.go:123] Gathering logs for container status ...
	I0108 21:11:30.667342  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:11:30.706771  157655 logs.go:123] Gathering logs for kube-proxy [9e1482203c36fa45ef77e65d0b72b0e3747030a04bd4d5c43b167fe17488f959] ...
	I0108 21:11:30.706800  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e1482203c36fa45ef77e65d0b72b0e3747030a04bd4d5c43b167fe17488f959"
	I0108 21:11:30.740536  157655 logs.go:123] Gathering logs for dmesg ...
	I0108 21:11:30.740564  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:11:30.754717  157655 logs.go:123] Gathering logs for kube-apiserver [70e4735350c76cff6df5f1b1e60d9b9d7c344bfeef1a1499b802f7d3964bac1c] ...
	I0108 21:11:30.754753  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70e4735350c76cff6df5f1b1e60d9b9d7c344bfeef1a1499b802f7d3964bac1c"
	I0108 21:11:30.974288  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:31.135474  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:31.474076  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:31.635008  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:31.975535  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:32.134799  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:32.519191  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:32.754346  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:11:32.974642  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:33.136099  157655 kapi.go:107] duration metric: took 1m11.004459477s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0108 21:11:33.139294  157655 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-954584 cluster.
	I0108 21:11:33.141757  157655 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0108 21:11:33.143374  157655 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0108 21:11:33.300853  157655 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0108 21:11:33.306182  157655 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0108 21:11:33.307285  157655 api_server.go:141] control plane version: v1.28.4
	I0108 21:11:33.307310  157655 api_server.go:131] duration metric: took 3.861269726s to wait for apiserver health ...
	I0108 21:11:33.307318  157655 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:11:33.307339  157655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:11:33.307382  157655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:11:33.342554  157655 cri.go:89] found id: "70e4735350c76cff6df5f1b1e60d9b9d7c344bfeef1a1499b802f7d3964bac1c"
	I0108 21:11:33.342575  157655 cri.go:89] found id: ""
	I0108 21:11:33.342583  157655 logs.go:284] 1 containers: [70e4735350c76cff6df5f1b1e60d9b9d7c344bfeef1a1499b802f7d3964bac1c]
	I0108 21:11:33.342640  157655 ssh_runner.go:195] Run: which crictl
	I0108 21:11:33.346055  157655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 21:11:33.346122  157655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:11:33.379060  157655 cri.go:89] found id: "6d3f9638bc6d45aaba773dc969fde485846a118fe66e1a791b37f1a1f906c576"
	I0108 21:11:33.379086  157655 cri.go:89] found id: ""
	I0108 21:11:33.379097  157655 logs.go:284] 1 containers: [6d3f9638bc6d45aaba773dc969fde485846a118fe66e1a791b37f1a1f906c576]
	I0108 21:11:33.379146  157655 ssh_runner.go:195] Run: which crictl
	I0108 21:11:33.382582  157655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 21:11:33.382653  157655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:11:33.428513  157655 cri.go:89] found id: "ae69dc4b7cc0866b8767bea1efd694a24e4f1564d622ecdd3a55d8c362becdc1"
	I0108 21:11:33.428534  157655 cri.go:89] found id: ""
	I0108 21:11:33.428541  157655 logs.go:284] 1 containers: [ae69dc4b7cc0866b8767bea1efd694a24e4f1564d622ecdd3a55d8c362becdc1]
	I0108 21:11:33.428601  157655 ssh_runner.go:195] Run: which crictl
	I0108 21:11:33.432059  157655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:11:33.432124  157655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:11:33.468172  157655 cri.go:89] found id: "0f99f418a818d2f5a2d41f1b1d3d38e46d5c9b1a3c412b362ee4492b34f4551d"
	I0108 21:11:33.468196  157655 cri.go:89] found id: ""
	I0108 21:11:33.468205  157655 logs.go:284] 1 containers: [0f99f418a818d2f5a2d41f1b1d3d38e46d5c9b1a3c412b362ee4492b34f4551d]
	I0108 21:11:33.468261  157655 ssh_runner.go:195] Run: which crictl
	I0108 21:11:33.471894  157655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:11:33.471963  157655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:11:33.473816  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:33.520561  157655 cri.go:89] found id: "9e1482203c36fa45ef77e65d0b72b0e3747030a04bd4d5c43b167fe17488f959"
	I0108 21:11:33.520581  157655 cri.go:89] found id: ""
	I0108 21:11:33.520592  157655 logs.go:284] 1 containers: [9e1482203c36fa45ef77e65d0b72b0e3747030a04bd4d5c43b167fe17488f959]
	I0108 21:11:33.520645  157655 ssh_runner.go:195] Run: which crictl
	I0108 21:11:33.523935  157655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:11:33.523999  157655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:11:33.556253  157655 cri.go:89] found id: "afa97e177c613aef2f86294a95172d48364c4e4272bffca6ac5338678a39eaf8"
	I0108 21:11:33.556279  157655 cri.go:89] found id: ""
	I0108 21:11:33.556287  157655 logs.go:284] 1 containers: [afa97e177c613aef2f86294a95172d48364c4e4272bffca6ac5338678a39eaf8]
	I0108 21:11:33.556338  157655 ssh_runner.go:195] Run: which crictl
	I0108 21:11:33.559776  157655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 21:11:33.559839  157655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 21:11:33.592293  157655 cri.go:89] found id: "372d2f0fe0aee88d74e2e2d05911d44f31f969d49aa3d7b59154af14b0b2a709"
	I0108 21:11:33.592332  157655 cri.go:89] found id: ""
	I0108 21:11:33.592343  157655 logs.go:284] 1 containers: [372d2f0fe0aee88d74e2e2d05911d44f31f969d49aa3d7b59154af14b0b2a709]
	I0108 21:11:33.592391  157655 ssh_runner.go:195] Run: which crictl
	I0108 21:11:33.595657  157655 logs.go:123] Gathering logs for kubelet ...
	I0108 21:11:33.595686  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 21:11:33.675961  157655 logs.go:123] Gathering logs for kube-apiserver [70e4735350c76cff6df5f1b1e60d9b9d7c344bfeef1a1499b802f7d3964bac1c] ...
	I0108 21:11:33.675999  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70e4735350c76cff6df5f1b1e60d9b9d7c344bfeef1a1499b802f7d3964bac1c"
	I0108 21:11:33.719368  157655 logs.go:123] Gathering logs for kube-controller-manager [afa97e177c613aef2f86294a95172d48364c4e4272bffca6ac5338678a39eaf8] ...
	I0108 21:11:33.719403  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afa97e177c613aef2f86294a95172d48364c4e4272bffca6ac5338678a39eaf8"
	I0108 21:11:33.774651  157655 logs.go:123] Gathering logs for CRI-O ...
	I0108 21:11:33.774683  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 21:11:33.848537  157655 logs.go:123] Gathering logs for kube-scheduler [0f99f418a818d2f5a2d41f1b1d3d38e46d5c9b1a3c412b362ee4492b34f4551d] ...
	I0108 21:11:33.848571  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f99f418a818d2f5a2d41f1b1d3d38e46d5c9b1a3c412b362ee4492b34f4551d"
	I0108 21:11:33.886584  157655 logs.go:123] Gathering logs for kube-proxy [9e1482203c36fa45ef77e65d0b72b0e3747030a04bd4d5c43b167fe17488f959] ...
	I0108 21:11:33.886619  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e1482203c36fa45ef77e65d0b72b0e3747030a04bd4d5c43b167fe17488f959"
	I0108 21:11:33.932766  157655 logs.go:123] Gathering logs for kindnet [372d2f0fe0aee88d74e2e2d05911d44f31f969d49aa3d7b59154af14b0b2a709] ...
	I0108 21:11:33.932794  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 372d2f0fe0aee88d74e2e2d05911d44f31f969d49aa3d7b59154af14b0b2a709"
	I0108 21:11:33.969768  157655 logs.go:123] Gathering logs for container status ...
	I0108 21:11:33.969804  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:11:34.017042  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:34.058957  157655 logs.go:123] Gathering logs for dmesg ...
	I0108 21:11:34.058988  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:11:34.073668  157655 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:11:34.073703  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 21:11:34.422807  157655 logs.go:123] Gathering logs for etcd [6d3f9638bc6d45aaba773dc969fde485846a118fe66e1a791b37f1a1f906c576] ...
	I0108 21:11:34.422848  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d3f9638bc6d45aaba773dc969fde485846a118fe66e1a791b37f1a1f906c576"
	I0108 21:11:34.519119  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:34.543646  157655 logs.go:123] Gathering logs for coredns [ae69dc4b7cc0866b8767bea1efd694a24e4f1564d622ecdd3a55d8c362becdc1] ...
	I0108 21:11:34.543685  157655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae69dc4b7cc0866b8767bea1efd694a24e4f1564d622ecdd3a55d8c362becdc1"
	I0108 21:11:34.974253  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:35.473358  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:35.975627  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:36.474049  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:36.978839  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:37.139901  157655 system_pods.go:59] 19 kube-system pods found
	I0108 21:11:37.139938  157655 system_pods.go:61] "coredns-5dd5756b68-b4v8l" [4c91101d-1a5e-4e38-9320-211910f9ab71] Running
	I0108 21:11:37.139946  157655 system_pods.go:61] "csi-hostpath-attacher-0" [c908decc-f493-49d4-8686-37e93d7220d3] Running
	I0108 21:11:37.139954  157655 system_pods.go:61] "csi-hostpath-resizer-0" [f26db14f-24a9-419f-93d7-e667a5efa757] Running
	I0108 21:11:37.139965  157655 system_pods.go:61] "csi-hostpathplugin-kd45v" [8c95f51e-c182-472f-8ff7-5da175ee7a74] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0108 21:11:37.139974  157655 system_pods.go:61] "etcd-addons-954584" [c27620dd-f91f-4dec-b5c4-f9f484dfe509] Running
	I0108 21:11:37.139987  157655 system_pods.go:61] "kindnet-bgpl6" [e640299c-dd60-49bf-94ff-376b32108dcc] Running
	I0108 21:11:37.140001  157655 system_pods.go:61] "kube-apiserver-addons-954584" [7d330784-a310-4d78-933f-6e81b0614a59] Running
	I0108 21:11:37.140012  157655 system_pods.go:61] "kube-controller-manager-addons-954584" [6d416252-141a-4f17-a168-fcd6373b2a0b] Running
	I0108 21:11:37.140021  157655 system_pods.go:61] "kube-ingress-dns-minikube" [cd725efb-f246-46e3-810b-944fae6e2733] Running
	I0108 21:11:37.140031  157655 system_pods.go:61] "kube-proxy-8dlx5" [2cc52d94-88b9-4ea2-927e-61b80d6f99c7] Running
	I0108 21:11:37.140042  157655 system_pods.go:61] "kube-scheduler-addons-954584" [2f88212c-f16f-42a4-a192-ad128d2ce97a] Running
	I0108 21:11:37.140052  157655 system_pods.go:61] "metrics-server-7c66d45ddc-5zm94" [1333b768-bab4-4c7f-9cb2-984cb4bafd4e] Running
	I0108 21:11:37.140062  157655 system_pods.go:61] "nvidia-device-plugin-daemonset-m7f7g" [39bbe5de-807a-4462-af6d-a7c9fe467dd8] Running
	I0108 21:11:37.140072  157655 system_pods.go:61] "registry-mjp6n" [2b788f31-0fbc-4a01-9482-8c2af240ed16] Running
	I0108 21:11:37.140079  157655 system_pods.go:61] "registry-proxy-lqthv" [1b8d1aac-9fd6-4221-beea-1a33b3cf142f] Running
	I0108 21:11:37.140088  157655 system_pods.go:61] "snapshot-controller-58dbcc7b99-24x4r" [d26613ad-9810-4237-a1a9-6a78f3df28e6] Running
	I0108 21:11:37.140094  157655 system_pods.go:61] "snapshot-controller-58dbcc7b99-bxjz2" [27defdfb-38fe-4c58-a544-9d2945e38d50] Running
	I0108 21:11:37.140104  157655 system_pods.go:61] "storage-provisioner" [c32a88c8-9b35-49ef-87a0-64eaf2184747] Running
	I0108 21:11:37.140115  157655 system_pods.go:61] "tiller-deploy-7b677967b9-vxnlw" [c122796b-ffb5-464a-bcc9-4c4b75bcc423] Running
	I0108 21:11:37.140127  157655 system_pods.go:74] duration metric: took 3.832802864s to wait for pod list to return data ...
	I0108 21:11:37.140140  157655 default_sa.go:34] waiting for default service account to be created ...
	I0108 21:11:37.142430  157655 default_sa.go:45] found service account: "default"
	I0108 21:11:37.142456  157655 default_sa.go:55] duration metric: took 2.306038ms for default service account to be created ...
	I0108 21:11:37.142467  157655 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 21:11:37.150766  157655 system_pods.go:86] 19 kube-system pods found
	I0108 21:11:37.150793  157655 system_pods.go:89] "coredns-5dd5756b68-b4v8l" [4c91101d-1a5e-4e38-9320-211910f9ab71] Running
	I0108 21:11:37.150799  157655 system_pods.go:89] "csi-hostpath-attacher-0" [c908decc-f493-49d4-8686-37e93d7220d3] Running
	I0108 21:11:37.150804  157655 system_pods.go:89] "csi-hostpath-resizer-0" [f26db14f-24a9-419f-93d7-e667a5efa757] Running
	I0108 21:11:37.150811  157655 system_pods.go:89] "csi-hostpathplugin-kd45v" [8c95f51e-c182-472f-8ff7-5da175ee7a74] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0108 21:11:37.150816  157655 system_pods.go:89] "etcd-addons-954584" [c27620dd-f91f-4dec-b5c4-f9f484dfe509] Running
	I0108 21:11:37.150823  157655 system_pods.go:89] "kindnet-bgpl6" [e640299c-dd60-49bf-94ff-376b32108dcc] Running
	I0108 21:11:37.150830  157655 system_pods.go:89] "kube-apiserver-addons-954584" [7d330784-a310-4d78-933f-6e81b0614a59] Running
	I0108 21:11:37.150844  157655 system_pods.go:89] "kube-controller-manager-addons-954584" [6d416252-141a-4f17-a168-fcd6373b2a0b] Running
	I0108 21:11:37.150855  157655 system_pods.go:89] "kube-ingress-dns-minikube" [cd725efb-f246-46e3-810b-944fae6e2733] Running
	I0108 21:11:37.150864  157655 system_pods.go:89] "kube-proxy-8dlx5" [2cc52d94-88b9-4ea2-927e-61b80d6f99c7] Running
	I0108 21:11:37.150874  157655 system_pods.go:89] "kube-scheduler-addons-954584" [2f88212c-f16f-42a4-a192-ad128d2ce97a] Running
	I0108 21:11:37.150881  157655 system_pods.go:89] "metrics-server-7c66d45ddc-5zm94" [1333b768-bab4-4c7f-9cb2-984cb4bafd4e] Running
	I0108 21:11:37.150895  157655 system_pods.go:89] "nvidia-device-plugin-daemonset-m7f7g" [39bbe5de-807a-4462-af6d-a7c9fe467dd8] Running
	I0108 21:11:37.150902  157655 system_pods.go:89] "registry-mjp6n" [2b788f31-0fbc-4a01-9482-8c2af240ed16] Running
	I0108 21:11:37.150909  157655 system_pods.go:89] "registry-proxy-lqthv" [1b8d1aac-9fd6-4221-beea-1a33b3cf142f] Running
	I0108 21:11:37.150916  157655 system_pods.go:89] "snapshot-controller-58dbcc7b99-24x4r" [d26613ad-9810-4237-a1a9-6a78f3df28e6] Running
	I0108 21:11:37.150923  157655 system_pods.go:89] "snapshot-controller-58dbcc7b99-bxjz2" [27defdfb-38fe-4c58-a544-9d2945e38d50] Running
	I0108 21:11:37.150930  157655 system_pods.go:89] "storage-provisioner" [c32a88c8-9b35-49ef-87a0-64eaf2184747] Running
	I0108 21:11:37.150938  157655 system_pods.go:89] "tiller-deploy-7b677967b9-vxnlw" [c122796b-ffb5-464a-bcc9-4c4b75bcc423] Running
	I0108 21:11:37.150950  157655 system_pods.go:126] duration metric: took 8.474121ms to wait for k8s-apps to be running ...
	I0108 21:11:37.150967  157655 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 21:11:37.151016  157655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:11:37.162414  157655 system_svc.go:56] duration metric: took 11.439662ms WaitForService to wait for kubelet.
	I0108 21:11:37.162440  157655 kubeadm.go:581] duration metric: took 1m23.92792497s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 21:11:37.162463  157655 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:11:37.165011  157655 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 21:11:37.165034  157655 node_conditions.go:123] node cpu capacity is 8
	I0108 21:11:37.165046  157655 node_conditions.go:105] duration metric: took 2.578723ms to run NodePressure ...
	I0108 21:11:37.165060  157655 start.go:228] waiting for startup goroutines ...
	I0108 21:11:37.473299  157655 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:11:37.974067  157655 kapi.go:107] duration metric: took 1m17.505876155s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0108 21:11:37.975852  157655 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, storage-provisioner, cloud-spanner, default-storageclass, storage-provisioner-rancher, inspektor-gadget, helm-tiller, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0108 21:11:37.976960  157655 addons.go:508] enable addons completed in 1m25.257895774s: enabled=[nvidia-device-plugin ingress-dns storage-provisioner cloud-spanner default-storageclass storage-provisioner-rancher inspektor-gadget helm-tiller metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0108 21:11:37.977000  157655 start.go:233] waiting for cluster config update ...
	I0108 21:11:37.977026  157655 start.go:242] writing updated cluster config ...
	I0108 21:11:37.977303  157655 ssh_runner.go:195] Run: rm -f paused
	I0108 21:11:38.026048  157655 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 21:11:38.027847  157655 out.go:177] * Done! kubectl is now configured to use "addons-954584" cluster and "default" namespace by default
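The kapi.go:96 lines above are minikube polling each addon's pods by label selector at roughly half-second intervals until they leave Pending, then recording a duration metric (kapi.go:107). Below is a minimal client-go sketch of that pattern, assuming a kubeconfig at the default location; it is an illustration of the technique, not minikube's actual kapi.go code.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForRunning polls pods matching selector in ns until they are all
	// Running or the timeout elapses, mirroring the wait loop in the log.
	func waitForRunning(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				allRunning := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						allRunning = false
					}
				}
				if allRunning {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence visible above
		}
		return fmt.Errorf("timed out waiting for pods matching %q", selector)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		start := time.Now()
		if err := waitForRunning(cs, "ingress-nginx",
			"app.kubernetes.io/name=ingress-nginx", 8*time.Minute); err != nil {
			panic(err)
		}
		fmt.Printf("duration metric: took %s\n", time.Since(start))
	}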
	
	
	==> CRI-O <==
	Jan 08 21:14:28 addons-954584 crio[949]: time="2024-01-08 21:14:28.293207942Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7" id=68d41d64-74a1-484e-9a40-8a160fffead7 name=/runtime.v1.ImageService/PullImage
	Jan 08 21:14:28 addons-954584 crio[949]: time="2024-01-08 21:14:28.294028861Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=0164bd6b-3d55-4bdd-90e1-4894ce4025c5 name=/runtime.v1.ImageService/ImageStatus
	Jan 08 21:14:28 addons-954584 crio[949]: time="2024-01-08 21:14:28.294881294Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=0164bd6b-3d55-4bdd-90e1-4894ce4025c5 name=/runtime.v1.ImageService/ImageStatus
	Jan 08 21:14:28 addons-954584 crio[949]: time="2024-01-08 21:14:28.295651764Z" level=info msg="Creating container: default/hello-world-app-5d77478584-p6krk/hello-world-app" id=27cdcfac-e351-4546-9d4d-19e5d3e44eaa name=/runtime.v1.RuntimeService/CreateContainer
	Jan 08 21:14:28 addons-954584 crio[949]: time="2024-01-08 21:14:28.295762064Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 08 21:14:28 addons-954584 crio[949]: time="2024-01-08 21:14:28.396887039Z" level=info msg="Created container 26ebb9187f26a537ccc629fc6364172ec64638831c9ab162bbf47886b45ce5fc: default/hello-world-app-5d77478584-p6krk/hello-world-app" id=27cdcfac-e351-4546-9d4d-19e5d3e44eaa name=/runtime.v1.RuntimeService/CreateContainer
	Jan 08 21:14:28 addons-954584 crio[949]: time="2024-01-08 21:14:28.397478261Z" level=info msg="Starting container: 26ebb9187f26a537ccc629fc6364172ec64638831c9ab162bbf47886b45ce5fc" id=63b81db5-8de0-4cdb-8720-4eabd7721e71 name=/runtime.v1.RuntimeService/StartContainer
	Jan 08 21:14:28 addons-954584 crio[949]: time="2024-01-08 21:14:28.404468924Z" level=info msg="Started container" PID=10485 containerID=26ebb9187f26a537ccc629fc6364172ec64638831c9ab162bbf47886b45ce5fc description=default/hello-world-app-5d77478584-p6krk/hello-world-app id=63b81db5-8de0-4cdb-8720-4eabd7721e71 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c43e3ba970465b850096d59c33a1f49dd52fa1941df39b5538608dca7c21f36e
	Jan 08 21:14:28 addons-954584 crio[949]: time="2024-01-08 21:14:28.450845089Z" level=info msg="Removing container: 7bfc2dd5cb7d9558411696be2ad5c8001d0ce03c887e4584f271c657bdcd316f" id=dbc841bf-689c-4bb7-8c7c-a023e7022077 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 08 21:14:28 addons-954584 crio[949]: time="2024-01-08 21:14:28.465334497Z" level=info msg="Removed container 7bfc2dd5cb7d9558411696be2ad5c8001d0ce03c887e4584f271c657bdcd316f: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=dbc841bf-689c-4bb7-8c7c-a023e7022077 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 08 21:14:30 addons-954584 crio[949]: time="2024-01-08 21:14:30.022607461Z" level=info msg="Stopping container: e2a8be4ce26ef5191fa0c1d57c4d2a59f5f7c882ad32fc8beb9feacc974478ee (timeout: 2s)" id=9cb76d60-29c6-43d3-9405-ae94f1de3495 name=/runtime.v1.RuntimeService/StopContainer
	Jan 08 21:14:32 addons-954584 crio[949]: time="2024-01-08 21:14:32.029234493Z" level=warning msg="Stopping container e2a8be4ce26ef5191fa0c1d57c4d2a59f5f7c882ad32fc8beb9feacc974478ee with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=9cb76d60-29c6-43d3-9405-ae94f1de3495 name=/runtime.v1.RuntimeService/StopContainer
	Jan 08 21:14:32 addons-954584 conmon[5912]: conmon e2a8be4ce26ef5191fa0 <ninfo>: container 5924 exited with status 137
	Jan 08 21:14:32 addons-954584 crio[949]: time="2024-01-08 21:14:32.160101753Z" level=info msg="Stopped container e2a8be4ce26ef5191fa0c1d57c4d2a59f5f7c882ad32fc8beb9feacc974478ee: ingress-nginx/ingress-nginx-controller-69cff4fd79-jjj75/controller" id=9cb76d60-29c6-43d3-9405-ae94f1de3495 name=/runtime.v1.RuntimeService/StopContainer
	Jan 08 21:14:32 addons-954584 crio[949]: time="2024-01-08 21:14:32.160743640Z" level=info msg="Stopping pod sandbox: 51ed11a7362e6df402c51475e9b14e732c5f78bc7c90134f21696a6f6465cc0f" id=a5e87c7a-3af8-400a-912f-7dceebc2b976 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 08 21:14:32 addons-954584 crio[949]: time="2024-01-08 21:14:32.163696851Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-X2EEONQW73ETV3CU - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-6KXFJFOZN2ZMWKSB - [0:0]\n-X KUBE-HP-6KXFJFOZN2ZMWKSB\n-X KUBE-HP-X2EEONQW73ETV3CU\nCOMMIT\n"
	Jan 08 21:14:32 addons-954584 crio[949]: time="2024-01-08 21:14:32.165015830Z" level=info msg="Closing host port tcp:80"
	Jan 08 21:14:32 addons-954584 crio[949]: time="2024-01-08 21:14:32.165059138Z" level=info msg="Closing host port tcp:443"
	Jan 08 21:14:32 addons-954584 crio[949]: time="2024-01-08 21:14:32.166494114Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jan 08 21:14:32 addons-954584 crio[949]: time="2024-01-08 21:14:32.166517739Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jan 08 21:14:32 addons-954584 crio[949]: time="2024-01-08 21:14:32.166687459Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-69cff4fd79-jjj75 Namespace:ingress-nginx ID:51ed11a7362e6df402c51475e9b14e732c5f78bc7c90134f21696a6f6465cc0f UID:0b3d7df4-5c38-4257-8039-acdb28194b68 NetNS:/var/run/netns/16387939-c4bb-4330-9f84-ec37829d2cd0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 08 21:14:32 addons-954584 crio[949]: time="2024-01-08 21:14:32.166856147Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-69cff4fd79-jjj75 from CNI network \"kindnet\" (type=ptp)"
	Jan 08 21:14:32 addons-954584 crio[949]: time="2024-01-08 21:14:32.211022278Z" level=info msg="Stopped pod sandbox: 51ed11a7362e6df402c51475e9b14e732c5f78bc7c90134f21696a6f6465cc0f" id=a5e87c7a-3af8-400a-912f-7dceebc2b976 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 08 21:14:32 addons-954584 crio[949]: time="2024-01-08 21:14:32.460537360Z" level=info msg="Removing container: e2a8be4ce26ef5191fa0c1d57c4d2a59f5f7c882ad32fc8beb9feacc974478ee" id=58c5ffe3-934a-4bff-91ee-5e1b1a695820 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 08 21:14:32 addons-954584 crio[949]: time="2024-01-08 21:14:32.474814914Z" level=info msg="Removed container e2a8be4ce26ef5191fa0c1d57c4d2a59f5f7c882ad32fc8beb9feacc974478ee: ingress-nginx/ingress-nginx-controller-69cff4fd79-jjj75/controller" id=58c5ffe3-934a-4bff-91ee-5e1b1a695820 name=/runtime.v1.RuntimeService/RemoveContainer
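The CRI-O lines above show the standard two-phase container stop: the controller is asked to exit within the 2-second grace period, the stop signal times out, and the process is then killed, after which conmon reports exit status 137. By the usual 128-plus-signal convention, 137 = 128 + 9, i.e. SIGKILL. A minimal illustration of that decoding:

	package main

	import "fmt"

	func main() {
		// conmon reported "exited with status 137" above; statuses above 128
		// encode termination by signal as 128 + signal number.
		status := 137
		if status > 128 {
			fmt.Printf("terminated by signal %d (128 + %d = %d, SIGKILL)\n",
				status-128, status-128, status)
		}
	}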
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	26ebb9187f26a       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   c43e3ba970465       hello-world-app-5d77478584-p6krk
	d7293a2dc89ed       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                              2 minutes ago       Running             nginx                     0                   3db2aa8576021       nginx
	9a2739e9d4c44       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67                        2 minutes ago       Running             headlamp                  0                   1866834863f13       headlamp-7ddfbb94ff-9hf2t
	b8b2575b8da0d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 3 minutes ago       Running             gcp-auth                  0                   015b03e6c3c44       gcp-auth-d4c87556c-4jndl
	b4f475d34e3f3       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   5680d6c94d445       yakd-dashboard-9947fc6bf-6jr4v
	66ac15e6f6739       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              patch                     0                   cf13c3dca48c3       ingress-nginx-admission-patch-x2qfd
	820ce7d568dae       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   759e8b0efe450       ingress-nginx-admission-create-l8zqv
	93c9fc2eeaa0e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   0525e25250511       storage-provisioner
	ae69dc4b7cc08       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   69973368b3520       coredns-5dd5756b68-b4v8l
	372d2f0fe0aee       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                             4 minutes ago       Running             kindnet-cni               0                   02773b16fe19a       kindnet-bgpl6
	9e1482203c36f       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   c544dfb2076b8       kube-proxy-8dlx5
	0f99f418a818d       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   af169c66f2124       kube-scheduler-addons-954584
	6d3f9638bc6d4       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   776db13e1d574       etcd-addons-954584
	70e4735350c76       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   07e02bb8481a9       kube-apiserver-addons-954584
	afa97e177c613       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   bf00124a1d05a       kube-controller-manager-addons-954584
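Note on the table above: the two Exited entries (create and patch, both from kube-webhook-certgen) are the one-shot Jobs that provision the ingress-nginx admission webhook certificates, so an Exited state there is expected rather than a failure; every long-running component in the listing is Running.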
	
	
	==> coredns [ae69dc4b7cc0866b8767bea1efd694a24e4f1564d622ecdd3a55d8c362becdc1] <==
	[INFO] 10.244.0.19:57938 - 27340 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000106929s
	[INFO] 10.244.0.19:52092 - 33907 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.003332613s
	[INFO] 10.244.0.19:52092 - 8564 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.003926342s
	[INFO] 10.244.0.19:60432 - 62608 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004170318s
	[INFO] 10.244.0.19:60432 - 39828 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005864163s
	[INFO] 10.244.0.19:34204 - 27664 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004065303s
	[INFO] 10.244.0.19:34204 - 14349 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005389675s
	[INFO] 10.244.0.19:43995 - 17396 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00006508s
	[INFO] 10.244.0.19:43995 - 47094 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000096354s
	[INFO] 10.244.0.21:52389 - 11197 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000227804s
	[INFO] 10.244.0.21:46484 - 42766 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00029102s
	[INFO] 10.244.0.21:59444 - 53148 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000140154s
	[INFO] 10.244.0.21:57820 - 13830 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000179458s
	[INFO] 10.244.0.21:43626 - 47489 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00010448s
	[INFO] 10.244.0.21:37890 - 9126 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000177508s
	[INFO] 10.244.0.21:50067 - 53515 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.005875659s
	[INFO] 10.244.0.21:35851 - 7367 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007195339s
	[INFO] 10.244.0.21:57511 - 24084 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006056208s
	[INFO] 10.244.0.21:44850 - 3070 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007703169s
	[INFO] 10.244.0.21:49252 - 55815 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005109081s
	[INFO] 10.244.0.21:41608 - 25984 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006674574s
	[INFO] 10.244.0.21:52489 - 36488 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000559806s
	[INFO] 10.244.0.21:43973 - 50582 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000686211s
	[INFO] 10.244.0.24:32791 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0001751s
	[INFO] 10.244.0.24:53891 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000164709s
	
	
	==> describe nodes <==
	Name:               addons-954584
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-954584
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=addons-954584
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T21_10_00_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-954584
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:09:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-954584
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 21:14:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:14:35 +0000   Mon, 08 Jan 2024 21:09:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:14:35 +0000   Mon, 08 Jan 2024 21:09:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:14:35 +0000   Mon, 08 Jan 2024 21:09:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:14:35 +0000   Mon, 08 Jan 2024 21:10:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-954584
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	System Info:
	  Machine ID:                 005d28571ad0407cb77575ae8e5b049a
	  System UUID:                b50b1fe9-8460-4d1e-84f3-003a292c5503
	  Boot ID:                    b9c55cc6-3d64-43dc-b6f4-c38d0ea8cf14
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-p6krk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gcp-auth                    gcp-auth-d4c87556c-4jndl                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  headlamp                    headlamp-7ddfbb94ff-9hf2t                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  kube-system                 coredns-5dd5756b68-b4v8l                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m25s
	  kube-system                 etcd-addons-954584                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m38s
	  kube-system                 kindnet-bgpl6                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m25s
	  kube-system                 kube-apiserver-addons-954584             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-controller-manager-addons-954584    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-proxy-8dlx5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-scheduler-addons-954584             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-6jr4v           0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             348Mi (1%)  476Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m20s  kube-proxy       
	  Normal  Starting                 4m38s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m38s  kubelet          Node addons-954584 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m38s  kubelet          Node addons-954584 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m38s  kubelet          Node addons-954584 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m26s  node-controller  Node addons-954584 event: Registered Node addons-954584 in Controller
	  Normal  NodeReady                3m52s  kubelet          Node addons-954584 status is now: NodeReady
	
	
	==> dmesg <==
	[Jan 8 21:12] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 9e 24 53 89 36 fb 7e c0 f8 ca 49 7b 08 00
	[  +1.004184] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 9e 24 53 89 36 fb 7e c0 f8 ca 49 7b 08 00
	[  +2.015807] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9e 24 53 89 36 fb 7e c0 f8 ca 49 7b 08 00
	[  +4.063616] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 9e 24 53 89 36 fb 7e c0 f8 ca 49 7b 08 00
	[  +8.191207] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 9e 24 53 89 36 fb 7e c0 f8 ca 49 7b 08 00
	[ +16.126437] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e 24 53 89 36 fb 7e c0 f8 ca 49 7b 08 00
	[Jan 8 21:13] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 9e 24 53 89 36 fb 7e c0 f8 ca 49 7b 08 00
	
	
	==> etcd [6d3f9638bc6d45aaba773dc969fde485846a118fe66e1a791b37f1a1f906c576] <==
	{"level":"info","ts":"2024-01-08T21:10:15.615114Z","caller":"traceutil/trace.go:171","msg":"trace[591015247] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"199.622399ms","start":"2024-01-08T21:10:15.415482Z","end":"2024-01-08T21:10:15.615104Z","steps":["trace[591015247] 'process raft request'  (duration: 199.09331ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T21:10:15.923044Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.12844ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128026368057863305 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/expand-controller\" mod_revision:279 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/expand-controller\" value_size:134 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/expand-controller\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-01-08T21:10:15.924211Z","caller":"traceutil/trace.go:171","msg":"trace[437869092] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"199.128204ms","start":"2024-01-08T21:10:15.725061Z","end":"2024-01-08T21:10:15.924189Z","steps":["trace[437869092] 'compare'  (duration: 195.933054ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:10:16.136066Z","caller":"traceutil/trace.go:171","msg":"trace[1767736818] transaction","detail":"{read_only:false; response_revision:387; number_of_response:1; }","duration":"119.178342ms","start":"2024-01-08T21:10:16.016866Z","end":"2024-01-08T21:10:16.136045Z","steps":["trace[1767736818] 'process raft request'  (duration: 119.050699ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:10:16.216508Z","caller":"traceutil/trace.go:171","msg":"trace[1375948221] linearizableReadLoop","detail":"{readStateIndex:400; appliedIndex:400; }","duration":"194.643239ms","start":"2024-01-08T21:10:16.021837Z","end":"2024-01-08T21:10:16.21648Z","steps":["trace[1375948221] 'read index received'  (duration: 194.630874ms)","trace[1375948221] 'applied index is now lower than readState.Index'  (duration: 9.937µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T21:10:16.321347Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.521936ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T21:10:16.322485Z","caller":"traceutil/trace.go:171","msg":"trace[443331931] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:387; }","duration":"300.66676ms","start":"2024-01-08T21:10:16.021801Z","end":"2024-01-08T21:10:16.322468Z","steps":["trace[443331931] 'agreement among raft nodes before linearized reading'  (duration: 198.248972ms)","trace[443331931] 'range keys from in-memory index tree'  (duration: 101.244842ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T21:10:16.32263Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-08T21:10:16.021787Z","time spent":"300.832021ms","remote":"127.0.0.1:42352","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-01-08T21:10:16.321878Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.748739ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/kube-system/registry\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T21:10:16.322988Z","caller":"traceutil/trace.go:171","msg":"trace[76027065] range","detail":"{range_begin:/registry/controllers/kube-system/registry; range_end:; response_count:0; response_revision:390; }","duration":"103.867113ms","start":"2024-01-08T21:10:16.219112Z","end":"2024-01-08T21:10:16.322979Z","steps":["trace[76027065] 'agreement among raft nodes before linearized reading'  (duration: 102.667336ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:10:16.322043Z","caller":"traceutil/trace.go:171","msg":"trace[1068792511] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"284.681071ms","start":"2024-01-08T21:10:16.037342Z","end":"2024-01-08T21:10:16.322023Z","steps":["trace[1068792511] 'process raft request'  (duration: 284.335935ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:10:16.322079Z","caller":"traceutil/trace.go:171","msg":"trace[460090544] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"284.92626ms","start":"2024-01-08T21:10:16.037146Z","end":"2024-01-08T21:10:16.322073Z","steps":["trace[460090544] 'process raft request'  (duration: 281.360787ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T21:10:16.32232Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.835316ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T21:10:16.324687Z","caller":"traceutil/trace.go:171","msg":"trace[1756227213] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:390; }","duration":"105.19784ms","start":"2024-01-08T21:10:16.219474Z","end":"2024-01-08T21:10:16.324672Z","steps":["trace[1756227213] 'agreement among raft nodes before linearized reading'  (duration: 102.820333ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:10:16.522867Z","caller":"traceutil/trace.go:171","msg":"trace[126255101] linearizableReadLoop","detail":"{readStateIndex:406; appliedIndex:405; }","duration":"102.865083ms","start":"2024-01-08T21:10:16.419978Z","end":"2024-01-08T21:10:16.522843Z","steps":["trace[126255101] 'read index received'  (duration: 94.598602ms)","trace[126255101] 'applied index is now lower than readState.Index'  (duration: 8.265632ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T21:10:16.52304Z","caller":"traceutil/trace.go:171","msg":"trace[1614227995] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"103.107983ms","start":"2024-01-08T21:10:16.419914Z","end":"2024-01-08T21:10:16.523022Z","steps":["trace[1614227995] 'process raft request'  (duration: 102.790439ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T21:10:16.52312Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.13803ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T21:10:16.526724Z","caller":"traceutil/trace.go:171","msg":"trace[1554190389] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:393; }","duration":"106.747275ms","start":"2024-01-08T21:10:16.419958Z","end":"2024-01-08T21:10:16.526705Z","steps":["trace[1554190389] 'agreement among raft nodes before linearized reading'  (duration: 103.112277ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T21:10:16.528258Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.11694ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-01-08T21:10:16.528357Z","caller":"traceutil/trace.go:171","msg":"trace[2006767106] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:394; }","duration":"108.232215ms","start":"2024-01-08T21:10:16.420113Z","end":"2024-01-08T21:10:16.528345Z","steps":["trace[2006767106] 'agreement among raft nodes before linearized reading'  (duration: 108.020445ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:11:32.752419Z","caller":"traceutil/trace.go:171","msg":"trace[521211924] linearizableReadLoop","detail":"{readStateIndex:1206; appliedIndex:1205; }","duration":"118.484348ms","start":"2024-01-08T21:11:32.6339Z","end":"2024-01-08T21:11:32.752384Z","steps":["trace[521211924] 'read index received'  (duration: 114.499513ms)","trace[521211924] 'applied index is now lower than readState.Index'  (duration: 3.983652ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T21:11:32.752585Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.685423ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11686"}
	{"level":"info","ts":"2024-01-08T21:11:32.752636Z","caller":"traceutil/trace.go:171","msg":"trace[1520034612] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1171; }","duration":"118.757521ms","start":"2024-01-08T21:11:32.633866Z","end":"2024-01-08T21:11:32.752623Z","steps":["trace[1520034612] 'agreement among raft nodes before linearized reading'  (duration: 118.601424ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:12:01.20332Z","caller":"traceutil/trace.go:171","msg":"trace[166050559] transaction","detail":"{read_only:false; response_revision:1409; number_of_response:1; }","duration":"111.978732ms","start":"2024-01-08T21:12:01.091316Z","end":"2024-01-08T21:12:01.203295Z","steps":["trace[166050559] 'process raft request'  (duration: 46.054246ms)","trace[166050559] 'compare'  (duration: 65.712439ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T21:12:17.289432Z","caller":"traceutil/trace.go:171","msg":"trace[1988416924] transaction","detail":"{read_only:false; response_revision:1561; number_of_response:1; }","duration":"119.267553ms","start":"2024-01-08T21:12:17.170145Z","end":"2024-01-08T21:12:17.289412Z","steps":["trace[1988416924] 'process raft request'  (duration: 119.169646ms)"],"step_count":1}
	
	
	==> gcp-auth [b8b2575b8da0dce0531c09a5c790c277b87a49f758f65c2fc8be04c69196909b] <==
	2024/01/08 21:11:32 GCP Auth Webhook started!
	2024/01/08 21:11:39 Ready to marshal response ...
	2024/01/08 21:11:39 Ready to write response ...
	2024/01/08 21:11:39 Ready to marshal response ...
	2024/01/08 21:11:39 Ready to write response ...
	2024/01/08 21:11:39 Ready to marshal response ...
	2024/01/08 21:11:39 Ready to write response ...
	2024/01/08 21:11:44 Ready to marshal response ...
	2024/01/08 21:11:44 Ready to write response ...
	2024/01/08 21:11:48 Ready to marshal response ...
	2024/01/08 21:11:48 Ready to write response ...
	2024/01/08 21:11:51 Ready to marshal response ...
	2024/01/08 21:11:51 Ready to write response ...
	2024/01/08 21:11:52 Ready to marshal response ...
	2024/01/08 21:11:52 Ready to write response ...
	2024/01/08 21:12:01 Ready to marshal response ...
	2024/01/08 21:12:01 Ready to write response ...
	2024/01/08 21:12:06 Ready to marshal response ...
	2024/01/08 21:12:06 Ready to write response ...
	2024/01/08 21:12:17 Ready to marshal response ...
	2024/01/08 21:12:17 Ready to write response ...
	2024/01/08 21:12:33 Ready to marshal response ...
	2024/01/08 21:12:33 Ready to write response ...
	2024/01/08 21:14:27 Ready to marshal response ...
	2024/01/08 21:14:27 Ready to write response ...
	
	
	==> kernel <==
	 21:14:37 up  3:57,  0 users,  load average: 0.27, 1.48, 1.99
	Linux addons-954584 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [372d2f0fe0aee88d74e2e2d05911d44f31f969d49aa3d7b59154af14b0b2a709] <==
	I0108 21:12:35.577079       1 main.go:227] handling current node
	I0108 21:12:45.588827       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 21:12:45.588850       1 main.go:227] handling current node
	I0108 21:12:55.592331       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 21:12:55.592355       1 main.go:227] handling current node
	I0108 21:13:05.596978       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 21:13:05.597004       1 main.go:227] handling current node
	I0108 21:13:15.600243       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 21:13:15.600266       1 main.go:227] handling current node
	I0108 21:13:25.603818       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 21:13:25.603839       1 main.go:227] handling current node
	I0108 21:13:35.616581       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 21:13:35.616604       1 main.go:227] handling current node
	I0108 21:13:45.620853       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 21:13:45.620875       1 main.go:227] handling current node
	I0108 21:13:55.632931       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 21:13:55.632951       1 main.go:227] handling current node
	I0108 21:14:05.636957       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 21:14:05.636980       1 main.go:227] handling current node
	I0108 21:14:15.649034       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 21:14:15.649059       1 main.go:227] handling current node
	I0108 21:14:25.652881       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 21:14:25.652908       1 main.go:227] handling current node
	I0108 21:14:35.656862       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 21:14:35.656885       1 main.go:227] handling current node
	
	
	==> kube-apiserver [70e4735350c76cff6df5f1b1e60d9b9d7c344bfeef1a1499b802f7d3964bac1c] <==
	I0108 21:12:06.064095       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0108 21:12:06.267895       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.236.16"}
	E0108 21:12:17.639722       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0108 21:12:29.544339       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0108 21:12:48.477986       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 21:12:48.478044       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 21:12:48.483895       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 21:12:48.483953       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 21:12:48.491335       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 21:12:48.491384       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 21:12:48.491657       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 21:12:48.491692       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 21:12:48.499919       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 21:12:48.499975       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 21:12:48.506808       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 21:12:48.506861       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 21:12:48.516794       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 21:12:48.516836       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 21:12:48.518735       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 21:12:48.518768       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0108 21:12:49.492453       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0108 21:12:49.517078       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0108 21:12:49.529279       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0108 21:13:02.689931       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0108 21:14:27.287763       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.228.4"}
	
	
	==> kube-controller-manager [afa97e177c613aef2f86294a95172d48364c4e4272bffca6ac5338678a39eaf8] <==
	E0108 21:13:24.087777       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 21:13:32.386516       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 21:13:32.386549       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 21:13:53.799422       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 21:13:53.799464       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 21:14:02.360264       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 21:14:02.360299       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 21:14:04.334534       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 21:14:04.334565       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 21:14:05.652302       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 21:14:05.652337       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0108 21:14:27.127261       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0108 21:14:27.137528       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-p6krk"
	I0108 21:14:27.141801       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="15.021442ms"
	I0108 21:14:27.146783       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="4.921894ms"
	I0108 21:14:27.146874       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="51.725µs"
	I0108 21:14:27.146958       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="45.548µs"
	I0108 21:14:27.152171       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="64.532µs"
	I0108 21:14:28.464343       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="7.603945ms"
	I0108 21:14:28.464430       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="44.449µs"
	I0108 21:14:29.004470       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0108 21:14:29.005983       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="4.044µs"
	I0108 21:14:29.015849       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W0108 21:14:35.615261       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 21:14:35.615298       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [9e1482203c36fa45ef77e65d0b72b0e3747030a04bd4d5c43b167fe17488f959] <==
	I0108 21:10:14.930785       1 server_others.go:69] "Using iptables proxy"
	I0108 21:10:15.424688       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0108 21:10:16.732257       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0108 21:10:16.831741       1 server_others.go:152] "Using iptables Proxier"
	I0108 21:10:16.831882       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0108 21:10:16.831933       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0108 21:10:16.831990       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 21:10:16.832306       1 server.go:846] "Version info" version="v1.28.4"
	I0108 21:10:16.832773       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:10:16.835933       1 config.go:188] "Starting service config controller"
	I0108 21:10:16.836025       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 21:10:16.836106       1 config.go:97] "Starting endpoint slice config controller"
	I0108 21:10:16.913791       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 21:10:16.836192       1 config.go:315] "Starting node config controller"
	I0108 21:10:16.913957       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 21:10:16.937313       1 shared_informer.go:318] Caches are synced for service config
	I0108 21:10:17.014014       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 21:10:17.014137       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [0f99f418a818d2f5a2d41f1b1d3d38e46d5c9b1a3c412b362ee4492b34f4551d] <==
	W0108 21:09:56.223586       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:09:56.223802       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 21:09:56.223555       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:09:56.223826       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 21:09:56.223656       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:09:56.223843       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 21:09:56.223689       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 21:09:56.223859       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 21:09:56.223689       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:09:56.223876       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 21:09:56.223712       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:09:56.223891       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 21:09:57.043198       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:09:57.043234       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 21:09:57.045316       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 21:09:57.045342       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 21:09:57.097846       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 21:09:57.097884       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0108 21:09:57.209566       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 21:09:57.209598       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 21:09:57.270483       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:09:57.270521       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 21:09:57.309752       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 21:09:57.309790       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0108 21:10:00.118548       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 08 21:14:27 addons-954584 kubelet[1553]: I0108 21:14:27.292498    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/48791be8-53d5-44f0-9091-e11349863580-gcp-creds\") pod \"hello-world-app-5d77478584-p6krk\" (UID: \"48791be8-53d5-44f0-9091-e11349863580\") " pod="default/hello-world-app-5d77478584-p6krk"
	Jan 08 21:14:27 addons-954584 kubelet[1553]: I0108 21:14:27.292571    1553 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlf6h\" (UniqueName: \"kubernetes.io/projected/48791be8-53d5-44f0-9091-e11349863580-kube-api-access-dlf6h\") pod \"hello-world-app-5d77478584-p6krk\" (UID: \"48791be8-53d5-44f0-9091-e11349863580\") " pod="default/hello-world-app-5d77478584-p6krk"
	Jan 08 21:14:27 addons-954584 kubelet[1553]: W0108 21:14:27.550324    1553 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/0e96d226e35e9edefe37da6406b1ca9031e05a066c9e0223fe573806ce93515e/crio-c43e3ba970465b850096d59c33a1f49dd52fa1941df39b5538608dca7c21f36e WatchSource:0}: Error finding container c43e3ba970465b850096d59c33a1f49dd52fa1941df39b5538608dca7c21f36e: Status 404 returned error can't find the container with id c43e3ba970465b850096d59c33a1f49dd52fa1941df39b5538608dca7c21f36e
	Jan 08 21:14:28 addons-954584 kubelet[1553]: I0108 21:14:28.421614    1553 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxpcf\" (UniqueName: \"kubernetes.io/projected/cd725efb-f246-46e3-810b-944fae6e2733-kube-api-access-pxpcf\") pod \"cd725efb-f246-46e3-810b-944fae6e2733\" (UID: \"cd725efb-f246-46e3-810b-944fae6e2733\") "
	Jan 08 21:14:28 addons-954584 kubelet[1553]: I0108 21:14:28.423369    1553 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd725efb-f246-46e3-810b-944fae6e2733-kube-api-access-pxpcf" (OuterVolumeSpecName: "kube-api-access-pxpcf") pod "cd725efb-f246-46e3-810b-944fae6e2733" (UID: "cd725efb-f246-46e3-810b-944fae6e2733"). InnerVolumeSpecName "kube-api-access-pxpcf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 08 21:14:28 addons-954584 kubelet[1553]: I0108 21:14:28.449796    1553 scope.go:117] "RemoveContainer" containerID="7bfc2dd5cb7d9558411696be2ad5c8001d0ce03c887e4584f271c657bdcd316f"
	Jan 08 21:14:28 addons-954584 kubelet[1553]: I0108 21:14:28.457371    1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-p6krk" podStartSLOduration=0.71772168 podCreationTimestamp="2024-01-08 21:14:27 +0000 UTC" firstStartedPulling="2024-01-08 21:14:27.553922075 +0000 UTC m=+268.380994060" lastFinishedPulling="2024-01-08 21:14:28.293518387 +0000 UTC m=+269.120590363" observedRunningTime="2024-01-08 21:14:28.456584348 +0000 UTC m=+269.283656342" watchObservedRunningTime="2024-01-08 21:14:28.457317983 +0000 UTC m=+269.284389978"
	Jan 08 21:14:28 addons-954584 kubelet[1553]: I0108 21:14:28.465577    1553 scope.go:117] "RemoveContainer" containerID="7bfc2dd5cb7d9558411696be2ad5c8001d0ce03c887e4584f271c657bdcd316f"
	Jan 08 21:14:28 addons-954584 kubelet[1553]: E0108 21:14:28.465957    1553 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bfc2dd5cb7d9558411696be2ad5c8001d0ce03c887e4584f271c657bdcd316f\": container with ID starting with 7bfc2dd5cb7d9558411696be2ad5c8001d0ce03c887e4584f271c657bdcd316f not found: ID does not exist" containerID="7bfc2dd5cb7d9558411696be2ad5c8001d0ce03c887e4584f271c657bdcd316f"
	Jan 08 21:14:28 addons-954584 kubelet[1553]: I0108 21:14:28.466010    1553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bfc2dd5cb7d9558411696be2ad5c8001d0ce03c887e4584f271c657bdcd316f"} err="failed to get container status \"7bfc2dd5cb7d9558411696be2ad5c8001d0ce03c887e4584f271c657bdcd316f\": rpc error: code = NotFound desc = could not find container \"7bfc2dd5cb7d9558411696be2ad5c8001d0ce03c887e4584f271c657bdcd316f\": container with ID starting with 7bfc2dd5cb7d9558411696be2ad5c8001d0ce03c887e4584f271c657bdcd316f not found: ID does not exist"
	Jan 08 21:14:28 addons-954584 kubelet[1553]: I0108 21:14:28.522273    1553 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pxpcf\" (UniqueName: \"kubernetes.io/projected/cd725efb-f246-46e3-810b-944fae6e2733-kube-api-access-pxpcf\") on node \"addons-954584\" DevicePath \"\""
	Jan 08 21:14:29 addons-954584 kubelet[1553]: I0108 21:14:29.316091    1553 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="075e7ecc-4217-44b4-8cc8-bb09d0c1b9b0" path="/var/lib/kubelet/pods/075e7ecc-4217-44b4-8cc8-bb09d0c1b9b0/volumes"
	Jan 08 21:14:29 addons-954584 kubelet[1553]: I0108 21:14:29.316511    1553 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="53337735-2864-4f74-b804-1ad0b8b4e460" path="/var/lib/kubelet/pods/53337735-2864-4f74-b804-1ad0b8b4e460/volumes"
	Jan 08 21:14:29 addons-954584 kubelet[1553]: I0108 21:14:29.316893    1553 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cd725efb-f246-46e3-810b-944fae6e2733" path="/var/lib/kubelet/pods/cd725efb-f246-46e3-810b-944fae6e2733/volumes"
	Jan 08 21:14:32 addons-954584 kubelet[1553]: I0108 21:14:32.350073    1553 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b3d7df4-5c38-4257-8039-acdb28194b68-webhook-cert\") pod \"0b3d7df4-5c38-4257-8039-acdb28194b68\" (UID: \"0b3d7df4-5c38-4257-8039-acdb28194b68\") "
	Jan 08 21:14:32 addons-954584 kubelet[1553]: I0108 21:14:32.350132    1553 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dphjx\" (UniqueName: \"kubernetes.io/projected/0b3d7df4-5c38-4257-8039-acdb28194b68-kube-api-access-dphjx\") pod \"0b3d7df4-5c38-4257-8039-acdb28194b68\" (UID: \"0b3d7df4-5c38-4257-8039-acdb28194b68\") "
	Jan 08 21:14:32 addons-954584 kubelet[1553]: I0108 21:14:32.351956    1553 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b3d7df4-5c38-4257-8039-acdb28194b68-kube-api-access-dphjx" (OuterVolumeSpecName: "kube-api-access-dphjx") pod "0b3d7df4-5c38-4257-8039-acdb28194b68" (UID: "0b3d7df4-5c38-4257-8039-acdb28194b68"). InnerVolumeSpecName "kube-api-access-dphjx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 08 21:14:32 addons-954584 kubelet[1553]: I0108 21:14:32.352064    1553 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b3d7df4-5c38-4257-8039-acdb28194b68-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "0b3d7df4-5c38-4257-8039-acdb28194b68" (UID: "0b3d7df4-5c38-4257-8039-acdb28194b68"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 21:14:32 addons-954584 kubelet[1553]: I0108 21:14:32.450652    1553 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b3d7df4-5c38-4257-8039-acdb28194b68-webhook-cert\") on node \"addons-954584\" DevicePath \"\""
	Jan 08 21:14:32 addons-954584 kubelet[1553]: I0108 21:14:32.450698    1553 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-dphjx\" (UniqueName: \"kubernetes.io/projected/0b3d7df4-5c38-4257-8039-acdb28194b68-kube-api-access-dphjx\") on node \"addons-954584\" DevicePath \"\""
	Jan 08 21:14:32 addons-954584 kubelet[1553]: I0108 21:14:32.459555    1553 scope.go:117] "RemoveContainer" containerID="e2a8be4ce26ef5191fa0c1d57c4d2a59f5f7c882ad32fc8beb9feacc974478ee"
	Jan 08 21:14:32 addons-954584 kubelet[1553]: I0108 21:14:32.475004    1553 scope.go:117] "RemoveContainer" containerID="e2a8be4ce26ef5191fa0c1d57c4d2a59f5f7c882ad32fc8beb9feacc974478ee"
	Jan 08 21:14:32 addons-954584 kubelet[1553]: E0108 21:14:32.475300    1553 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2a8be4ce26ef5191fa0c1d57c4d2a59f5f7c882ad32fc8beb9feacc974478ee\": container with ID starting with e2a8be4ce26ef5191fa0c1d57c4d2a59f5f7c882ad32fc8beb9feacc974478ee not found: ID does not exist" containerID="e2a8be4ce26ef5191fa0c1d57c4d2a59f5f7c882ad32fc8beb9feacc974478ee"
	Jan 08 21:14:32 addons-954584 kubelet[1553]: I0108 21:14:32.475357    1553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2a8be4ce26ef5191fa0c1d57c4d2a59f5f7c882ad32fc8beb9feacc974478ee"} err="failed to get container status \"e2a8be4ce26ef5191fa0c1d57c4d2a59f5f7c882ad32fc8beb9feacc974478ee\": rpc error: code = NotFound desc = could not find container \"e2a8be4ce26ef5191fa0c1d57c4d2a59f5f7c882ad32fc8beb9feacc974478ee\": container with ID starting with e2a8be4ce26ef5191fa0c1d57c4d2a59f5f7c882ad32fc8beb9feacc974478ee not found: ID does not exist"
	Jan 08 21:14:33 addons-954584 kubelet[1553]: I0108 21:14:33.316396    1553 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0b3d7df4-5c38-4257-8039-acdb28194b68" path="/var/lib/kubelet/pods/0b3d7df4-5c38-4257-8039-acdb28194b68/volumes"
	
	
	==> storage-provisioner [93c9fc2eeaa0effdc60278b38a2ba1aba2c885bb5adc93e2e2feaff16eee4982] <==
	I0108 21:10:46.995891       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 21:10:47.020073       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 21:10:47.020147       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 21:10:47.027038       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 21:10:47.027098       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4aaa7b3a-5efc-4a29-b06f-a6296c24182d", APIVersion:"v1", ResourceVersion:"910", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-954584_fffd2630-3335-40bd-a7c1-a2376ed4f864 became leader
	I0108 21:10:47.027225       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-954584_fffd2630-3335-40bd-a7c1-a2376ed4f864!
	I0108 21:10:47.127799       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-954584_fffd2630-3335-40bd-a7c1-a2376ed4f864!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-954584 -n addons-954584
helpers_test.go:261: (dbg) Run:  kubectl --context addons-954584 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.31s)
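The failing step above is the in-node curl through the nginx ingress; "ssh: Process exited with status 28" is curl's timeout exit code (CURLE_OPERATION_TIMEDOUT) surfacing through minikube ssh. As a minimal sketch of the same check, the hypothetical Go snippet below (not part of addons_test.go; it assumes the ingress controller answers on 127.0.0.1 from wherever it runs) sends the request with the Host header the ingress rule matches on:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Bound the request so a black-holed ingress surfaces as an error
		// (curl reports the same condition as exit status 28).
		client := &http.Client{Timeout: 10 * time.Second}

		req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
		if err != nil {
			panic(err)
		}
		// The ingress rule routes on the Host header, not DNS, so the request
		// targets 127.0.0.1 while presenting itself as nginx.example.com.
		req.Host = "nginx.example.com"

		resp, err := client.Do(req)
		if err != nil {
			fmt.Println("ingress check failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("ingress responded:", resp.Status)
	}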

x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (176.17s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-177638 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-177638 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (10.953279458s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-177638 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-177638 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a4a2356e-ca12-4ffc-a290-07b3bdd81f7b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a4a2356e-ca12-4ffc-a290-07b3bdd81f7b] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.003412249s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-177638 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0108 21:21:38.049031  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt: no such file or directory
E0108 21:22:05.731903  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-177638 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.114528579s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-177638 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-177638 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.007403818s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
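For reference: the ingress-dns addon answers DNS queries on the node IP itself, so this timeout means nothing responded on 192.168.49.2:53. A hedged way to probe it directly and confirm the addon pod is up (grepping the pod by name is an assumption, not taken from this log):
	# query the node IP with a short, single-try timeout (dig ships with bind-utils)
	dig @192.168.49.2 hello-john.test +time=5 +tries=1
	# confirm the ingress-dns pod exists and is Running
	kubectl --context ingress-addon-legacy-177638 -n kube-system get pods | grep -i ingress-dns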
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-177638 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-177638 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-177638 addons disable ingress --alsologtostderr -v=1: (7.408831342s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-177638
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-177638:

-- stdout --
	[
	    {
	        "Id": "7824767e4d6cef979d9fec195215476d64fe07fda28a5ea37ee33bbe6ea403b9",
	        "Created": "2024-01-08T21:18:40.352530963Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 196685,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T21:18:40.632889158Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:127d4e2273d98a7f5001d818ad9d78fbfe93f6fb3b59e0136dea97a2dd09d9f5",
	        "ResolvConfPath": "/var/lib/docker/containers/7824767e4d6cef979d9fec195215476d64fe07fda28a5ea37ee33bbe6ea403b9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7824767e4d6cef979d9fec195215476d64fe07fda28a5ea37ee33bbe6ea403b9/hostname",
	        "HostsPath": "/var/lib/docker/containers/7824767e4d6cef979d9fec195215476d64fe07fda28a5ea37ee33bbe6ea403b9/hosts",
	        "LogPath": "/var/lib/docker/containers/7824767e4d6cef979d9fec195215476d64fe07fda28a5ea37ee33bbe6ea403b9/7824767e4d6cef979d9fec195215476d64fe07fda28a5ea37ee33bbe6ea403b9-json.log",
	        "Name": "/ingress-addon-legacy-177638",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-177638:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-177638",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/21905bdf58700ff881d75127e5c32b7579f63f02be9c663d7b50b33df79d5711-init/diff:/var/lib/docker/overlay2/36c91ea73c875a756d19f8a4637b501585f27b26abca7b178ac0d11596ac7a7f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/21905bdf58700ff881d75127e5c32b7579f63f02be9c663d7b50b33df79d5711/merged",
	                "UpperDir": "/var/lib/docker/overlay2/21905bdf58700ff881d75127e5c32b7579f63f02be9c663d7b50b33df79d5711/diff",
	                "WorkDir": "/var/lib/docker/overlay2/21905bdf58700ff881d75127e5c32b7579f63f02be9c663d7b50b33df79d5711/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-177638",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-177638/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-177638",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-177638",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-177638",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "50072106d47ba39d3a1bd5994c44d740dbc0ae602d636e93ef48ed15e5e82fea",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/50072106d47b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-177638": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7824767e4d6c",
	                        "ingress-addon-legacy-177638"
	                    ],
	                    "NetworkID": "d0e3c0b48ded7f3eb569c0bbfce022f3befe6df7646a3f0cedc6c7541a53fc4c",
	                    "EndpointID": "1e4afc8b0914258ce1ff9c8ede3e7a3bfd15d4585fa08a5570e711d2b3d49011",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
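Aside: most of the inspect dump above is boilerplate; the same Go-template syntax the harness uses later in this log can pull out just the fields the post-mortem needs. A hedged one-liner:
	docker inspect ingress-addon-legacy-177638 --format 'state={{.State.Status}} ip={{(index .NetworkSettings.Networks "ingress-addon-legacy-177638").IPAddress}} ssh=127.0.0.1:{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'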
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-177638 -n ingress-addon-legacy-177638
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-177638 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-177638 logs -n 25: (1.0622048s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                     Args                                     |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-727506                                                            | functional-727506           | jenkins | v1.32.0 | 08 Jan 24 21:18 UTC | 08 Jan 24 21:18 UTC |
	|                | update-context                                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                       |                             |         |         |                     |                     |
	| image          | functional-727506 image ls                                                   | functional-727506           | jenkins | v1.32.0 | 08 Jan 24 21:18 UTC | 08 Jan 24 21:18 UTC |
	| image          | functional-727506 image load --daemon                                        | functional-727506           | jenkins | v1.32.0 | 08 Jan 24 21:18 UTC | 08 Jan 24 21:18 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-727506                     |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-727506 image ls                                                   | functional-727506           | jenkins | v1.32.0 | 08 Jan 24 21:18 UTC | 08 Jan 24 21:18 UTC |
	| image          | functional-727506 image save                                                 | functional-727506           | jenkins | v1.32.0 | 08 Jan 24 21:18 UTC | 08 Jan 24 21:18 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-727506                     |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-727506 image rm                                                   | functional-727506           | jenkins | v1.32.0 | 08 Jan 24 21:18 UTC | 08 Jan 24 21:18 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-727506                     |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-727506 image ls                                                   | functional-727506           | jenkins | v1.32.0 | 08 Jan 24 21:18 UTC | 08 Jan 24 21:18 UTC |
	| image          | functional-727506 image load                                                 | functional-727506           | jenkins | v1.32.0 | 08 Jan 24 21:18 UTC | 08 Jan 24 21:18 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-727506 image ls                                                   | functional-727506           | jenkins | v1.32.0 | 08 Jan 24 21:18 UTC | 08 Jan 24 21:18 UTC |
	| image          | functional-727506 image save --daemon                                        | functional-727506           | jenkins | v1.32.0 | 08 Jan 24 21:18 UTC | 08 Jan 24 21:18 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-727506                     |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-727506                                                            | functional-727506           | jenkins | v1.32.0 | 08 Jan 24 21:18 UTC | 08 Jan 24 21:18 UTC |
	|                | image ls --format yaml                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| ssh            | functional-727506 ssh pgrep                                                  | functional-727506           | jenkins | v1.32.0 | 08 Jan 24 21:18 UTC |                     |
	|                | buildkitd                                                                    |                             |         |         |                     |                     |
	| image          | functional-727506                                                            | functional-727506           | jenkins | v1.32.0 | 08 Jan 24 21:18 UTC |                     |
	|                | image ls --format json                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-727506                                                            | functional-727506           | jenkins | v1.32.0 | 08 Jan 24 21:18 UTC | 08 Jan 24 21:18 UTC |
	|                | image ls --format short                                                      |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-727506                                                            | functional-727506           | jenkins | v1.32.0 | 08 Jan 24 21:18 UTC | 08 Jan 24 21:18 UTC |
	|                | image ls --format table                                                      |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-727506 image build -t                                             | functional-727506           | jenkins | v1.32.0 | 08 Jan 24 21:18 UTC | 08 Jan 24 21:18 UTC |
	|                | localhost/my-image:functional-727506                                         |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                             |                             |         |         |                     |                     |
	| image          | functional-727506 image ls                                                   | functional-727506           | jenkins | v1.32.0 | 08 Jan 24 21:18 UTC | 08 Jan 24 21:18 UTC |
	| delete         | -p functional-727506                                                         | functional-727506           | jenkins | v1.32.0 | 08 Jan 24 21:18 UTC | 08 Jan 24 21:18 UTC |
	| start          | -p ingress-addon-legacy-177638                                               | ingress-addon-legacy-177638 | jenkins | v1.32.0 | 08 Jan 24 21:18 UTC | 08 Jan 24 21:19 UTC |
	|                | --kubernetes-version=v1.18.20                                                |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                         |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                                     |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-177638                                                  | ingress-addon-legacy-177638 | jenkins | v1.32.0 | 08 Jan 24 21:19 UTC | 08 Jan 24 21:19 UTC |
	|                | addons enable ingress                                                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-177638                                                  | ingress-addon-legacy-177638 | jenkins | v1.32.0 | 08 Jan 24 21:19 UTC | 08 Jan 24 21:19 UTC |
	|                | addons enable ingress-dns                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-177638                                                  | ingress-addon-legacy-177638 | jenkins | v1.32.0 | 08 Jan 24 21:19 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                                |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                                 |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-177638 ip                                               | ingress-addon-legacy-177638 | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC | 08 Jan 24 21:22 UTC |
	| addons         | ingress-addon-legacy-177638                                                  | ingress-addon-legacy-177638 | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC | 08 Jan 24 21:22 UTC |
	|                | addons disable ingress-dns                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-177638                                                  | ingress-addon-legacy-177638 | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC | 08 Jan 24 21:22 UTC |
	|                | addons disable ingress                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
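	Note: the audit trail above is enough to replay the failing scenario by hand; a condensed sketch, with every flag copied from the table:
	  out/minikube-linux-amd64 start -p ingress-addon-legacy-177638 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker --container-runtime=crio
	  out/minikube-linux-amd64 -p ingress-addon-legacy-177638 addons enable ingress --alsologtostderr -v=5
	  out/minikube-linux-amd64 -p ingress-addon-legacy-177638 addons enable ingress-dns --alsologtostderr -v=5
	  out/minikube-linux-amd64 -p ingress-addon-legacy-177638 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"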
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 21:18:27
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:18:27.870988  196095 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:18:27.871109  196095 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:18:27.871118  196095 out.go:309] Setting ErrFile to fd 2...
	I0108 21:18:27.871122  196095 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:18:27.871329  196095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-150013/.minikube/bin
	I0108 21:18:27.871919  196095 out.go:303] Setting JSON to false
	I0108 21:18:27.873530  196095 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":14460,"bootTime":1704734248,"procs":895,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:18:27.873589  196095 start.go:138] virtualization: kvm guest
	I0108 21:18:27.875736  196095 out.go:177] * [ingress-addon-legacy-177638] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:18:27.877160  196095 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 21:18:27.877227  196095 notify.go:220] Checking for updates...
	I0108 21:18:27.878559  196095 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:18:27.879925  196095 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-150013/kubeconfig
	I0108 21:18:27.881337  196095 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-150013/.minikube
	I0108 21:18:27.882672  196095 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:18:27.883982  196095 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:18:27.885616  196095 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:18:27.908467  196095 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 21:18:27.908588  196095 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:18:27.958836  196095 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-08 21:18:27.950487493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:18:27.958938  196095 docker.go:295] overlay module found
	I0108 21:18:27.960817  196095 out.go:177] * Using the docker driver based on user configuration
	I0108 21:18:27.962157  196095 start.go:298] selected driver: docker
	I0108 21:18:27.962170  196095 start.go:902] validating driver "docker" against <nil>
	I0108 21:18:27.962181  196095 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:18:27.962933  196095 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:18:28.015586  196095 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-08 21:18:28.006745839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:18:28.015816  196095 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0108 21:18:28.016101  196095 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:18:28.018238  196095 out.go:177] * Using Docker driver with root privileges
	I0108 21:18:28.019639  196095 cni.go:84] Creating CNI manager for ""
	I0108 21:18:28.019660  196095 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 21:18:28.019673  196095 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 21:18:28.019690  196095 start_flags.go:321] config:
	{Name:ingress-addon-legacy-177638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-177638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:18:28.021220  196095 out.go:177] * Starting control plane node ingress-addon-legacy-177638 in cluster ingress-addon-legacy-177638
	I0108 21:18:28.022472  196095 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 21:18:28.023804  196095 out.go:177] * Pulling base image v0.0.42-1703790982-17866 ...
	I0108 21:18:28.024965  196095 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 21:18:28.024992  196095 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local docker daemon
	I0108 21:18:28.040758  196095 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local docker daemon, skipping pull
	I0108 21:18:28.040780  196095 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d exists in daemon, skipping load
	I0108 21:18:28.055444  196095 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0108 21:18:28.055463  196095 cache.go:56] Caching tarball of preloaded images
	I0108 21:18:28.055618  196095 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 21:18:28.057326  196095 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0108 21:18:28.058656  196095 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0108 21:18:28.090760  196095 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17866-150013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0108 21:18:32.203717  196095 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0108 21:18:32.203816  196095 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17866-150013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0108 21:18:33.209044  196095 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
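	Note: the preload URL and its md5 checksum both appear verbatim in the download line above; a hedged manual fetch-and-verify of the same artifact:
	  curl -fLo preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4"
	  md5sum preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4   # expect 0d02e096853189c5b37812b400898e14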
	I0108 21:18:33.209401  196095 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/config.json ...
	I0108 21:18:33.209431  196095 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/config.json: {Name:mkf7ab67db0494ea481f807654493d942b08a854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:18:33.209611  196095 cache.go:194] Successfully downloaded all kic artifacts
	I0108 21:18:33.209667  196095 start.go:365] acquiring machines lock for ingress-addon-legacy-177638: {Name:mk0a2829e5e18c02bbddd5615c04ee32dea9da80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:18:33.209724  196095 start.go:369] acquired machines lock for "ingress-addon-legacy-177638" in 43.264µs
	I0108 21:18:33.209743  196095 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-177638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-177638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 21:18:33.209836  196095 start.go:125] createHost starting for "" (driver="docker")
	I0108 21:18:33.213011  196095 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0108 21:18:33.213241  196095 start.go:159] libmachine.API.Create for "ingress-addon-legacy-177638" (driver="docker")
	I0108 21:18:33.213271  196095 client.go:168] LocalClient.Create starting
	I0108 21:18:33.213332  196095 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem
	I0108 21:18:33.213362  196095 main.go:141] libmachine: Decoding PEM data...
	I0108 21:18:33.213378  196095 main.go:141] libmachine: Parsing certificate...
	I0108 21:18:33.213434  196095 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-150013/.minikube/certs/cert.pem
	I0108 21:18:33.213464  196095 main.go:141] libmachine: Decoding PEM data...
	I0108 21:18:33.213476  196095 main.go:141] libmachine: Parsing certificate...
	I0108 21:18:33.213758  196095 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-177638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 21:18:33.229325  196095 cli_runner.go:211] docker network inspect ingress-addon-legacy-177638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 21:18:33.229411  196095 network_create.go:281] running [docker network inspect ingress-addon-legacy-177638] to gather additional debugging logs...
	I0108 21:18:33.229432  196095 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-177638
	W0108 21:18:33.244019  196095 cli_runner.go:211] docker network inspect ingress-addon-legacy-177638 returned with exit code 1
	I0108 21:18:33.244051  196095 network_create.go:284] error running [docker network inspect ingress-addon-legacy-177638]: docker network inspect ingress-addon-legacy-177638: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-177638 not found
	I0108 21:18:33.244065  196095 network_create.go:286] output of [docker network inspect ingress-addon-legacy-177638]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-177638 not found
	
	** /stderr **
	I0108 21:18:33.244187  196095 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:18:33.260665  196095 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00075f8c0}
	I0108 21:18:33.260716  196095 network_create.go:124] attempt to create docker network ingress-addon-legacy-177638 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0108 21:18:33.260765  196095 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-177638 ingress-addon-legacy-177638
	I0108 21:18:33.313427  196095 network_create.go:108] docker network ingress-addon-legacy-177638 192.168.49.0/24 created
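	For reference: the freshly created network can be verified with the same inspect template the harness itself runs a few lines up; a small sketch, expected values copied from this log:
	  docker network inspect ingress-addon-legacy-177638 --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'   # expect: 192.168.49.0/24 gw 192.168.49.1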
	I0108 21:18:33.313488  196095 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-177638" container
	I0108 21:18:33.313558  196095 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 21:18:33.327956  196095 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-177638 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-177638 --label created_by.minikube.sigs.k8s.io=true
	I0108 21:18:33.344009  196095 oci.go:103] Successfully created a docker volume ingress-addon-legacy-177638
	I0108 21:18:33.344084  196095 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-177638-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-177638 --entrypoint /usr/bin/test -v ingress-addon-legacy-177638:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -d /var/lib
	I0108 21:18:35.061134  196095 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-177638-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-177638 --entrypoint /usr/bin/test -v ingress-addon-legacy-177638:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -d /var/lib: (1.717000058s)
	I0108 21:18:35.061167  196095 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-177638
	I0108 21:18:35.061186  196095 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 21:18:35.061208  196095 kic.go:194] Starting extracting preloaded images to volume ...
	I0108 21:18:35.061265  196095 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17866-150013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-177638:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 21:18:40.289720  196095 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17866-150013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-177638:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -I lz4 -xf /preloaded.tar -C /extractDir: (5.228389311s)
	I0108 21:18:40.289760  196095 kic.go:203] duration metric: took 5.228544 seconds to extract preloaded images to volume
	W0108 21:18:40.289914  196095 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 21:18:40.290017  196095 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 21:18:40.338598  196095 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-177638 --name ingress-addon-legacy-177638 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-177638 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-177638 --network ingress-addon-legacy-177638 --ip 192.168.49.2 --volume ingress-addon-legacy-177638:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d
	I0108 21:18:40.640975  196095 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-177638 --format={{.State.Running}}
	I0108 21:18:40.658061  196095 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-177638 --format={{.State.Status}}
	I0108 21:18:40.676237  196095 cli_runner.go:164] Run: docker exec ingress-addon-legacy-177638 stat /var/lib/dpkg/alternatives/iptables
	I0108 21:18:40.742999  196095 oci.go:144] the created container "ingress-addon-legacy-177638" has a running status.
	I0108 21:18:40.743044  196095 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17866-150013/.minikube/machines/ingress-addon-legacy-177638/id_rsa...
	I0108 21:18:40.977993  196095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/machines/ingress-addon-legacy-177638/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0108 21:18:40.978043  196095 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17866-150013/.minikube/machines/ingress-addon-legacy-177638/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 21:18:40.998155  196095 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-177638 --format={{.State.Status}}
	I0108 21:18:41.017950  196095 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 21:18:41.017975  196095 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-177638 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 21:18:41.120287  196095 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-177638 --format={{.State.Status}}
	I0108 21:18:41.146744  196095 machine.go:88] provisioning docker machine ...
	I0108 21:18:41.146785  196095 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-177638"
	I0108 21:18:41.146855  196095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-177638
	I0108 21:18:41.169054  196095 main.go:141] libmachine: Using SSH client type: native
	I0108 21:18:41.169683  196095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0108 21:18:41.169716  196095 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-177638 && echo "ingress-addon-legacy-177638" | sudo tee /etc/hostname
	I0108 21:18:41.348292  196095 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-177638
	
	I0108 21:18:41.348419  196095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-177638
	I0108 21:18:41.364807  196095 main.go:141] libmachine: Using SSH client type: native
	I0108 21:18:41.365158  196095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0108 21:18:41.365187  196095 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-177638' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-177638/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-177638' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:18:41.505349  196095 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:18:41.505395  196095 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17866-150013/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-150013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-150013/.minikube}
	I0108 21:18:41.505465  196095 ubuntu.go:177] setting up certificates
	I0108 21:18:41.505476  196095 provision.go:83] configureAuth start
	I0108 21:18:41.505528  196095 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-177638
	I0108 21:18:41.520598  196095 provision.go:138] copyHostCerts
	I0108 21:18:41.520643  196095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17866-150013/.minikube/ca.pem
	I0108 21:18:41.520676  196095 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-150013/.minikube/ca.pem, removing ...
	I0108 21:18:41.520688  196095 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-150013/.minikube/ca.pem
	I0108 21:18:41.520757  196095 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-150013/.minikube/ca.pem (1078 bytes)
	I0108 21:18:41.520856  196095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17866-150013/.minikube/cert.pem
	I0108 21:18:41.520887  196095 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-150013/.minikube/cert.pem, removing ...
	I0108 21:18:41.520898  196095 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-150013/.minikube/cert.pem
	I0108 21:18:41.520935  196095 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-150013/.minikube/cert.pem (1123 bytes)
	I0108 21:18:41.520997  196095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17866-150013/.minikube/key.pem
	I0108 21:18:41.521020  196095 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-150013/.minikube/key.pem, removing ...
	I0108 21:18:41.521028  196095 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-150013/.minikube/key.pem
	I0108 21:18:41.521063  196095 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-150013/.minikube/key.pem (1675 bytes)
	I0108 21:18:41.521129  196095 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-150013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-177638 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-177638]
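provision.go generates this server certificate with Go's crypto libraries; a rough openssl equivalent using the CA paths and SANs from the log line above (the flags are an illustrative assumption, not minikube's implementation):

	CERTS=/home/jenkins/minikube-integration/17866-150013/.minikube/certs
	# issue a key+CSR for the machine, then sign it with the minikube CA,
	# embedding the same subject alternative names the log reports
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.ingress-addon-legacy-177638"
	openssl x509 -req -in server.csr -CA "$CERTS/ca.pem" -CAkey "$CERTS/ca-key.pem" \
	  -CAcreateserial -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:ingress-addon-legacy-177638')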
	I0108 21:18:41.604616  196095 provision.go:172] copyRemoteCerts
	I0108 21:18:41.604679  196095 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:18:41.604717  196095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-177638
	I0108 21:18:41.620648  196095 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/ingress-addon-legacy-177638/id_rsa Username:docker}
	I0108 21:18:41.721617  196095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 21:18:41.721685  196095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:18:41.742724  196095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 21:18:41.742789  196095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0108 21:18:41.762966  196095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 21:18:41.763033  196095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 21:18:41.782962  196095 provision.go:86] duration metric: configureAuth took 277.473849ms
	I0108 21:18:41.782990  196095 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:18:41.783209  196095 config.go:182] Loaded profile config "ingress-addon-legacy-177638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0108 21:18:41.783327  196095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-177638
	I0108 21:18:41.799041  196095 main.go:141] libmachine: Using SSH client type: native
	I0108 21:18:41.799396  196095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0108 21:18:41.799414  196095 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 21:18:42.041575  196095 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 21:18:42.041602  196095 machine.go:91] provisioned docker machine in 894.834512ms
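Provisioning ended by writing /etc/sysconfig/crio.minikube and restarting CRI-O (the SSH command a few lines up); a quick sanity check inside the guest would be:

	cat /etc/sysconfig/crio.minikube   # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl is-active crio           # expect: active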
	I0108 21:18:42.041610  196095 client.go:171] LocalClient.Create took 8.828333893s
	I0108 21:18:42.041633  196095 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-177638" took 8.828394931s
	I0108 21:18:42.041643  196095 start.go:300] post-start starting for "ingress-addon-legacy-177638" (driver="docker")
	I0108 21:18:42.041657  196095 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:18:42.041731  196095 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:18:42.041769  196095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-177638
	I0108 21:18:42.057800  196095 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/ingress-addon-legacy-177638/id_rsa Username:docker}
	I0108 21:18:42.158090  196095 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:18:42.161055  196095 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:18:42.161102  196095 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:18:42.161119  196095 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:18:42.161130  196095 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 21:18:42.161142  196095 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-150013/.minikube/addons for local assets ...
	I0108 21:18:42.161189  196095 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-150013/.minikube/files for local assets ...
	I0108 21:18:42.161256  196095 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-150013/.minikube/files/etc/ssl/certs/1566482.pem -> 1566482.pem in /etc/ssl/certs
	I0108 21:18:42.161266  196095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/files/etc/ssl/certs/1566482.pem -> /etc/ssl/certs/1566482.pem
	I0108 21:18:42.161370  196095 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:18:42.168795  196095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/files/etc/ssl/certs/1566482.pem --> /etc/ssl/certs/1566482.pem (1708 bytes)
	I0108 21:18:42.190302  196095 start.go:303] post-start completed in 148.637884ms
	I0108 21:18:42.190666  196095 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-177638
	I0108 21:18:42.206968  196095 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/config.json ...
	I0108 21:18:42.207213  196095 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:18:42.207256  196095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-177638
	I0108 21:18:42.223267  196095 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/ingress-addon-legacy-177638/id_rsa Username:docker}
	I0108 21:18:42.318082  196095 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
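The two df probes read the used-percentage and the free gibibytes of /var, which minikube feeds into its low-disk warnings; standalone they behave like (sample values are illustrative):

	df -h  /var | awk 'NR==2{print $5}'   # column 5 = Use%, e.g. 12%
	df -BG /var | awk 'NR==2{print $4}'   # column 4 = Avail in GiB, e.g. 180G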
	I0108 21:18:42.322112  196095 start.go:128] duration metric: createHost completed in 9.112258914s
	I0108 21:18:42.322136  196095 start.go:83] releasing machines lock for "ingress-addon-legacy-177638", held for 9.112401832s
	I0108 21:18:42.322207  196095 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-177638
	I0108 21:18:42.338567  196095 ssh_runner.go:195] Run: cat /version.json
	I0108 21:18:42.338612  196095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-177638
	I0108 21:18:42.338651  196095 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:18:42.338714  196095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-177638
	I0108 21:18:42.354717  196095 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/ingress-addon-legacy-177638/id_rsa Username:docker}
	I0108 21:18:42.356222  196095 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/ingress-addon-legacy-177638/id_rsa Username:docker}
	I0108 21:18:42.444977  196095 ssh_runner.go:195] Run: systemctl --version
	I0108 21:18:42.536023  196095 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 21:18:42.671280  196095 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 21:18:42.675422  196095 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:18:42.692664  196095 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 21:18:42.692749  196095 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:18:42.718218  196095 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
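The find/mv passes above sideline the preinstalled loopback, bridge, and podman CNI configs by appending .mk_disabled, leaving the directory empty until the kindnet manifest is applied at the end of this run. The effect is visible with:

	ls /etc/cni/net.d/
	# expect the renamed files from the log, e.g.:
	# 87-podman-bridge.conflist.mk_disabled  100-crio-bridge.conf.mk_disabled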
	I0108 21:18:42.718242  196095 start.go:475] detecting cgroup driver to use...
	I0108 21:18:42.718271  196095 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 21:18:42.718308  196095 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:18:42.731735  196095 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:18:42.741263  196095 docker.go:203] disabling cri-docker service (if available) ...
	I0108 21:18:42.741319  196095 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 21:18:42.753062  196095 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 21:18:42.765075  196095 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 21:18:42.845845  196095 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 21:18:42.926847  196095 docker.go:219] disabling docker service ...
	I0108 21:18:42.926912  196095 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:18:42.944095  196095 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:18:42.954622  196095 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:18:43.030172  196095 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:18:43.105719  196095 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:18:43.115661  196095 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:18:43.129250  196095 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0108 21:18:43.129317  196095 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:18:43.137561  196095 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 21:18:43.137622  196095 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:18:43.145992  196095 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:18:43.154313  196095 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
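The sed edits above should leave the drop-in declaring the pause image, the cgroupfs manager, and the conmon cgroup; a sketch of the expected state (exact key placement varies with the CRI-O version):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.2"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"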
	I0108 21:18:43.162320  196095 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 21:18:43.169825  196095 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:18:43.176666  196095 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:18:43.183431  196095 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:18:43.252250  196095 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 21:18:43.347038  196095 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 21:18:43.347098  196095 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 21:18:43.350349  196095 start.go:543] Will wait 60s for crictl version
	I0108 21:18:43.350404  196095 ssh_runner.go:195] Run: which crictl
	I0108 21:18:43.353301  196095 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 21:18:43.383526  196095 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0108 21:18:43.383609  196095 ssh_runner.go:195] Run: crio --version
	I0108 21:18:43.416241  196095 ssh_runner.go:195] Run: crio --version
	I0108 21:18:43.449899  196095 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0108 21:18:43.451290  196095 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-177638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:18:43.466990  196095 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0108 21:18:43.470389  196095 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:18:43.480090  196095 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 21:18:43.480154  196095 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:18:43.521677  196095 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0108 21:18:43.521737  196095 ssh_runner.go:195] Run: which lz4
	I0108 21:18:43.524918  196095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0108 21:18:43.525000  196095 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0108 21:18:43.528033  196095 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 21:18:43.528058  196095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0108 21:18:44.430905  196095 crio.go:444] Took 0.905924 seconds to copy over tarball
	I0108 21:18:44.430967  196095 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 21:18:46.683888  196095 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.252883504s)
	I0108 21:18:46.683930  196095 crio.go:451] Took 2.252996 seconds to extract the tarball
	I0108 21:18:46.683940  196095 ssh_runner.go:146] rm: /preloaded.tar.lz4
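The `tar -I lz4` extraction above is equivalent to streaming the archive through lz4 manually, which can be handy when debugging a corrupt preload (assumes the lz4 binary is present in the guest):

	lz4 -dc /preloaded.tar.lz4 | sudo tar -x -C /var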
	I0108 21:18:46.753995  196095 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:18:46.784752  196095 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0108 21:18:46.784780  196095 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0108 21:18:46.784884  196095 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 21:18:46.784862  196095 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 21:18:46.784917  196095 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0108 21:18:46.784925  196095 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0108 21:18:46.784944  196095 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 21:18:46.784892  196095 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 21:18:46.784919  196095 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0108 21:18:46.784855  196095 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:18:46.786044  196095 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 21:18:46.786065  196095 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 21:18:46.786070  196095 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0108 21:18:46.786044  196095 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 21:18:46.786044  196095 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:18:46.786125  196095 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 21:18:46.786264  196095 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0108 21:18:46.786297  196095 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0108 21:18:46.972298  196095 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 21:18:46.982507  196095 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0108 21:18:46.985819  196095 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:18:46.993532  196095 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0108 21:18:46.994746  196095 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0108 21:18:47.004131  196095 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0108 21:18:47.011876  196095 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0108 21:18:47.017033  196095 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0108 21:18:47.017075  196095 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 21:18:47.017125  196095 ssh_runner.go:195] Run: which crictl
	I0108 21:18:47.035518  196095 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0108 21:18:47.035687  196095 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0108 21:18:47.035737  196095 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 21:18:47.035773  196095 ssh_runner.go:195] Run: which crictl
	I0108 21:18:47.144098  196095 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0108 21:18:47.144149  196095 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0108 21:18:47.144158  196095 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0108 21:18:47.144195  196095 ssh_runner.go:195] Run: which crictl
	I0108 21:18:47.144200  196095 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0108 21:18:47.144210  196095 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0108 21:18:47.144242  196095 ssh_runner.go:195] Run: which crictl
	I0108 21:18:47.144272  196095 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0108 21:18:47.144243  196095 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 21:18:47.144302  196095 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 21:18:47.144315  196095 ssh_runner.go:195] Run: which crictl
	I0108 21:18:47.144302  196095 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 21:18:47.144356  196095 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0108 21:18:47.144380  196095 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0108 21:18:47.144382  196095 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0108 21:18:47.144404  196095 ssh_runner.go:195] Run: which crictl
	I0108 21:18:47.144413  196095 ssh_runner.go:195] Run: which crictl
	I0108 21:18:47.147835  196095 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0108 21:18:47.149309  196095 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0108 21:18:47.149406  196095 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0108 21:18:47.242118  196095 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0108 21:18:47.242148  196095 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0108 21:18:47.242189  196095 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0108 21:18:47.242241  196095 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0108 21:18:47.242294  196095 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0108 21:18:47.242333  196095 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0108 21:18:47.242375  196095 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0108 21:18:47.327637  196095 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0108 21:18:47.332177  196095 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0108 21:18:47.332241  196095 cache_images.go:92] LoadImages completed in 547.444938ms
	W0108 21:18:47.332346  196095 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20: no such file or directory
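The X warning only means the host-side cache under .minikube/cache/images was never populated for the v1.18.20 tags; the run proceeds and kubeadm's preflight pulls the images instead (see the "[preflight] Pulling images" line further down). A host-side check:

	ls /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/ \
	  2>/dev/null || echo 'no cached images on host'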
	I0108 21:18:47.332419  196095 ssh_runner.go:195] Run: crio config
	I0108 21:18:47.372037  196095 cni.go:84] Creating CNI manager for ""
	I0108 21:18:47.372059  196095 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 21:18:47.372079  196095 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:18:47.372119  196095 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-177638 NodeName:ingress-addon-legacy-177638 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0108 21:18:47.372283  196095 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-177638"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
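The rendered config above is staged as /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml before init (see the cp further down); if needed it can be exercised without changing the node via kubeadm's dry-run mode, e.g.:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run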
	
	I0108 21:18:47.372379  196095 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-177638 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-177638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 21:18:47.372438  196095 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0108 21:18:47.380516  196095 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:18:47.380596  196095 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:18:47.388170  196095 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0108 21:18:47.403714  196095 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0108 21:18:47.419124  196095 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
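Those three scp's install the kubelet drop-in, the unit file, and the staged kubeadm config; inside the guest the result can be inspected with:

	systemctl cat kubelet                       # unit plus the 10-kubeadm.conf drop-in
	head /var/tmp/minikube/kubeadm.yaml.new     # staged config, promoted to kubeadm.yaml below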
	I0108 21:18:47.434656  196095 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:18:47.437937  196095 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:18:47.447330  196095 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638 for IP: 192.168.49.2
	I0108 21:18:47.447364  196095 certs.go:190] acquiring lock for shared ca certs: {Name:mk66e763e1c1c88a577c7e7f60df668cab98f63b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:18:47.447504  196095 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-150013/.minikube/ca.key
	I0108 21:18:47.447550  196095 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-150013/.minikube/proxy-client-ca.key
	I0108 21:18:47.447597  196095 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.key
	I0108 21:18:47.447609  196095 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt with IP's: []
	I0108 21:18:47.524400  196095 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt ...
	I0108 21:18:47.524431  196095 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt: {Name:mkd5d73ac17a23e4ea8ad597489c4ae7f49c4f18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:18:47.524594  196095 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.key ...
	I0108 21:18:47.524607  196095 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.key: {Name:mk1e8a03a406ee71aa60afb3a87e39db2053c1a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:18:47.524676  196095 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/apiserver.key.dd3b5fb2
	I0108 21:18:47.524696  196095 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 21:18:47.688565  196095 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/apiserver.crt.dd3b5fb2 ...
	I0108 21:18:47.688597  196095 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/apiserver.crt.dd3b5fb2: {Name:mk2cf952f4ee7364fb2f2c279f6659a336267691 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:18:47.688763  196095 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/apiserver.key.dd3b5fb2 ...
	I0108 21:18:47.688777  196095 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/apiserver.key.dd3b5fb2: {Name:mk76c952eeaa111b6ebabd88eda23150ab290204 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:18:47.688845  196095 certs.go:337] copying /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/apiserver.crt
	I0108 21:18:47.688929  196095 certs.go:341] copying /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/apiserver.key
	I0108 21:18:47.688992  196095 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/proxy-client.key
	I0108 21:18:47.689006  196095 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/proxy-client.crt with IP's: []
	I0108 21:18:47.994678  196095 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/proxy-client.crt ...
	I0108 21:18:47.994714  196095 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/proxy-client.crt: {Name:mk88c4a6011dd5b228c43b7e8427b2ceff49fa09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:18:47.994882  196095 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/proxy-client.key ...
	I0108 21:18:47.994896  196095 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/proxy-client.key: {Name:mkcf04e45c0fbbebc58f0f38ef943b74c663ef6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:18:47.994967  196095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 21:18:47.994984  196095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 21:18:47.994994  196095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 21:18:47.995009  196095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 21:18:47.995022  196095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 21:18:47.995032  196095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 21:18:47.995044  196095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 21:18:47.995056  196095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 21:18:47.995115  196095 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/home/jenkins/minikube-integration/17866-150013/.minikube/certs/156648.pem (1338 bytes)
	W0108 21:18:47.995158  196095 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-150013/.minikube/certs/home/jenkins/minikube-integration/17866-150013/.minikube/certs/156648_empty.pem, impossibly tiny 0 bytes
	I0108 21:18:47.995169  196095 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca-key.pem (1679 bytes)
	I0108 21:18:47.995191  196095 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:18:47.995218  196095 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/home/jenkins/minikube-integration/17866-150013/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:18:47.995248  196095 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/home/jenkins/minikube-integration/17866-150013/.minikube/certs/key.pem (1675 bytes)
	I0108 21:18:47.995287  196095 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-150013/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-150013/.minikube/files/etc/ssl/certs/1566482.pem (1708 bytes)
	I0108 21:18:47.995316  196095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/156648.pem -> /usr/share/ca-certificates/156648.pem
	I0108 21:18:47.995332  196095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/files/etc/ssl/certs/1566482.pem -> /usr/share/ca-certificates/1566482.pem
	I0108 21:18:47.995344  196095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:18:47.995932  196095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:18:48.017595  196095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 21:18:48.038016  196095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:18:48.058293  196095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 21:18:48.078662  196095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:18:48.099188  196095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:18:48.119744  196095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:18:48.140607  196095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 21:18:48.161248  196095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/certs/156648.pem --> /usr/share/ca-certificates/156648.pem (1338 bytes)
	I0108 21:18:48.181712  196095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/files/etc/ssl/certs/1566482.pem --> /usr/share/ca-certificates/1566482.pem (1708 bytes)
	I0108 21:18:48.202070  196095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:18:48.222047  196095 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:18:48.237054  196095 ssh_runner.go:195] Run: openssl version
	I0108 21:18:48.241753  196095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:18:48.249835  196095 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:18:48.252860  196095 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:18:48.252908  196095 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:18:48.258960  196095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:18:48.267044  196095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/156648.pem && ln -fs /usr/share/ca-certificates/156648.pem /etc/ssl/certs/156648.pem"
	I0108 21:18:48.274920  196095 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/156648.pem
	I0108 21:18:48.277986  196095 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:15 /usr/share/ca-certificates/156648.pem
	I0108 21:18:48.278024  196095 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/156648.pem
	I0108 21:18:48.284125  196095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/156648.pem /etc/ssl/certs/51391683.0"
	I0108 21:18:48.292111  196095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1566482.pem && ln -fs /usr/share/ca-certificates/1566482.pem /etc/ssl/certs/1566482.pem"
	I0108 21:18:48.300060  196095 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1566482.pem
	I0108 21:18:48.303075  196095 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:15 /usr/share/ca-certificates/1566482.pem
	I0108 21:18:48.303130  196095 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1566482.pem
	I0108 21:18:48.309139  196095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1566482.pem /etc/ssl/certs/3ec20f2e.0"
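The test -L / ln -fs steps follow OpenSSL's hashed-directory convention: each CA under /etc/ssl/certs must also be reachable as <subject-hash>.0. The hash names used above come straight from the `openssl x509 -hash` invocations, e.g.:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # b5213941
	openssl x509 -hash -noout -in /usr/share/ca-certificates/156648.pem       # 51391683
	openssl x509 -hash -noout -in /usr/share/ca-certificates/1566482.pem      # 3ec20f2e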
	I0108 21:18:48.316979  196095 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 21:18:48.319775  196095 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:18:48.319836  196095 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-177638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-177638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:18:48.319939  196095 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 21:18:48.319988  196095 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:18:48.352014  196095 cri.go:89] found id: ""
	I0108 21:18:48.352087  196095 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:18:48.360002  196095 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:18:48.367740  196095 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:18:48.367818  196095 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:18:48.375234  196095 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:18:48.375296  196095 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:18:48.416612  196095 kubeadm.go:322] W0108 21:18:48.416061    1376 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0108 21:18:48.453726  196095 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I0108 21:18:48.519364  196095 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:18:50.982447  196095 kubeadm.go:322] W0108 21:18:50.982143    1376 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0108 21:18:50.983689  196095 kubeadm.go:322] W0108 21:18:50.983424    1376 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0108 21:18:58.937005  196095 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0108 21:18:58.937062  196095 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 21:18:58.937174  196095 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:18:58.937240  196095 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I0108 21:18:58.937271  196095 kubeadm.go:322] OS: Linux
	I0108 21:18:58.937314  196095 kubeadm.go:322] CGROUPS_CPU: enabled
	I0108 21:18:58.937355  196095 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0108 21:18:58.937409  196095 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0108 21:18:58.937497  196095 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0108 21:18:58.937548  196095 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0108 21:18:58.937600  196095 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0108 21:18:58.937693  196095 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:18:58.937792  196095 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:18:58.937923  196095 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:18:58.938070  196095 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:18:58.938153  196095 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:18:58.938192  196095 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 21:18:58.938265  196095 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:18:58.939924  196095 out.go:204]   - Generating certificates and keys ...
	I0108 21:18:58.940006  196095 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 21:18:58.940061  196095 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 21:18:58.940121  196095 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 21:18:58.940168  196095 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 21:18:58.940244  196095 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 21:18:58.940311  196095 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 21:18:58.940399  196095 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 21:18:58.940535  196095 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-177638 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 21:18:58.940580  196095 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 21:18:58.940681  196095 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-177638 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 21:18:58.940734  196095 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 21:18:58.940785  196095 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 21:18:58.940829  196095 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 21:18:58.940875  196095 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:18:58.940931  196095 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:18:58.940975  196095 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:18:58.941027  196095 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:18:58.941077  196095 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:18:58.941141  196095 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:18:58.942436  196095 out.go:204]   - Booting up control plane ...
	I0108 21:18:58.942522  196095 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:18:58.942587  196095 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:18:58.942641  196095 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:18:58.942713  196095 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:18:58.942851  196095 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:18:58.942924  196095 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.502035 seconds
	I0108 21:18:58.943017  196095 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:18:58.943127  196095 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:18:58.943180  196095 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:18:58.943293  196095 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-177638 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0108 21:18:58.943338  196095 kubeadm.go:322] [bootstrap-token] Using token: wymkt4.8f294adpoad797l4
	I0108 21:18:58.944886  196095 out.go:204]   - Configuring RBAC rules ...
	I0108 21:18:58.944992  196095 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:18:58.945065  196095 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:18:58.945182  196095 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:18:58.945290  196095 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:18:58.945384  196095 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:18:58.945488  196095 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:18:58.945635  196095 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:18:58.945708  196095 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 21:18:58.945775  196095 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 21:18:58.945784  196095 kubeadm.go:322] 
	I0108 21:18:58.945876  196095 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 21:18:58.945892  196095 kubeadm.go:322] 
	I0108 21:18:58.945998  196095 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 21:18:58.946008  196095 kubeadm.go:322] 
	I0108 21:18:58.946047  196095 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 21:18:58.946137  196095 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:18:58.946187  196095 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:18:58.946193  196095 kubeadm.go:322] 
	I0108 21:18:58.946234  196095 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 21:18:58.946304  196095 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:18:58.946404  196095 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:18:58.946415  196095 kubeadm.go:322] 
	I0108 21:18:58.946483  196095 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:18:58.946548  196095 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 21:18:58.946554  196095 kubeadm.go:322] 
	I0108 21:18:58.946626  196095 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token wymkt4.8f294adpoad797l4 \
	I0108 21:18:58.946716  196095 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:fe80ea8f0241372b35f859c8f235bcbcae49b73ca5a44c92d8472de9d18d4109 \
	I0108 21:18:58.946737  196095 kubeadm.go:322]     --control-plane 
	I0108 21:18:58.946743  196095 kubeadm.go:322] 
	I0108 21:18:58.946815  196095 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:18:58.946822  196095 kubeadm.go:322] 
	I0108 21:18:58.946889  196095 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token wymkt4.8f294adpoad797l4 \
	I0108 21:18:58.946987  196095 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:fe80ea8f0241372b35f859c8f235bcbcae49b73ca5a44c92d8472de9d18d4109 
	I0108 21:18:58.946998  196095 cni.go:84] Creating CNI manager for ""
	I0108 21:18:58.947004  196095 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 21:18:58.948475  196095 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:18:58.949938  196095 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:18:58.953644  196095 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0108 21:18:58.953661  196095 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 21:18:58.969144  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:18:59.365779  196095 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:18:59.365876  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:18:59.365876  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=ingress-addon-legacy-177638 minikube.k8s.io/updated_at=2024_01_08T21_18_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:18:59.372600  196095 ops.go:34] apiserver oom_adj: -16
	I0108 21:18:59.517983  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:00.018882  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:00.518660  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:01.018146  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:01.518682  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:02.018524  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:02.518661  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:03.018646  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:03.518891  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:04.019104  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:04.518515  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:05.018503  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:05.518015  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:06.018820  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:06.518720  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:07.018930  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:07.518087  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:08.018900  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:08.518399  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:09.018682  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:09.518993  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:10.018665  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:10.518232  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:11.018655  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:11.518692  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:12.018034  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:12.518098  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:13.018148  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:13.518488  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:14.018497  196095 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:14.082480  196095 kubeadm.go:1088] duration metric: took 14.716675808s to wait for elevateKubeSystemPrivileges.
	I0108 21:19:14.082521  196095 kubeadm.go:406] StartCluster complete in 25.762694398s
	I0108 21:19:14.082548  196095 settings.go:142] acquiring lock: {Name:mka49c6122422560714ade880e41fa20698ed59b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:19:14.082680  196095 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-150013/kubeconfig
	I0108 21:19:14.083516  196095 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/kubeconfig: {Name:mk7bacc6ac7c9afd0d9363f33909f58b6056dc76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:19:14.083757  196095 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:19:14.083771  196095 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 21:19:14.083846  196095 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-177638"
	I0108 21:19:14.083857  196095 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-177638"
	I0108 21:19:14.083872  196095 addons.go:237] Setting addon storage-provisioner=true in "ingress-addon-legacy-177638"
	I0108 21:19:14.083874  196095 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-177638"
	I0108 21:19:14.083941  196095 host.go:66] Checking if "ingress-addon-legacy-177638" exists ...
	I0108 21:19:14.083982  196095 config.go:182] Loaded profile config "ingress-addon-legacy-177638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0108 21:19:14.084279  196095 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-177638 --format={{.State.Status}}
	I0108 21:19:14.084497  196095 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-177638 --format={{.State.Status}}
	I0108 21:19:14.084421  196095 kapi.go:59] client config for ingress-addon-legacy-177638: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.key", CAFile:"/home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:19:14.085563  196095 cert_rotation.go:137] Starting client certificate rotation controller
	I0108 21:19:14.111996  196095 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:19:14.113469  196095 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:19:14.113495  196095 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:19:14.113560  196095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-177638
	I0108 21:19:14.115954  196095 kapi.go:59] client config for ingress-addon-legacy-177638: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.key", CAFile:"/home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:19:14.116255  196095 addons.go:237] Setting addon default-storageclass=true in "ingress-addon-legacy-177638"
	I0108 21:19:14.116293  196095 host.go:66] Checking if "ingress-addon-legacy-177638" exists ...
	I0108 21:19:14.116769  196095 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-177638 --format={{.State.Status}}
	I0108 21:19:14.136157  196095 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/ingress-addon-legacy-177638/id_rsa Username:docker}
	I0108 21:19:14.139070  196095 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:19:14.139101  196095 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:19:14.139161  196095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-177638
	I0108 21:19:14.162693  196095 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/ingress-addon-legacy-177638/id_rsa Username:docker}
	I0108 21:19:14.176735  196095 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:19:14.327579  196095 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:19:14.332598  196095 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:19:14.530129  196095 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0108 21:19:14.615603  196095 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-177638" context rescaled to 1 replicas
	I0108 21:19:14.615661  196095 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 21:19:14.618011  196095 out.go:177] * Verifying Kubernetes components...
	I0108 21:19:14.619554  196095 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:19:14.743232  196095 kapi.go:59] client config for ingress-addon-legacy-177638: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.key", CAFile:"/home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:19:14.743629  196095 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-177638" to be "Ready" ...
	I0108 21:19:14.749329  196095 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 21:19:14.751216  196095 addons.go:508] enable addons completed in 667.446443ms: enabled=[storage-provisioner default-storageclass]
	I0108 21:19:16.747512  196095 node_ready.go:58] node "ingress-addon-legacy-177638" has status "Ready":"False"
	I0108 21:19:19.265780  196095 node_ready.go:58] node "ingress-addon-legacy-177638" has status "Ready":"False"
	I0108 21:19:19.746902  196095 node_ready.go:49] node "ingress-addon-legacy-177638" has status "Ready":"True"
	I0108 21:19:19.746929  196095 node_ready.go:38] duration metric: took 5.003264359s waiting for node "ingress-addon-legacy-177638" to be "Ready" ...
	I0108 21:19:19.746940  196095 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:19:19.753384  196095 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-br5m8" in "kube-system" namespace to be "Ready" ...
	I0108 21:19:21.757134  196095 pod_ready.go:102] pod "coredns-66bff467f8-br5m8" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-08 21:19:13 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0108 21:19:24.259502  196095 pod_ready.go:102] pod "coredns-66bff467f8-br5m8" in "kube-system" namespace has status "Ready":"False"
	I0108 21:19:26.758372  196095 pod_ready.go:92] pod "coredns-66bff467f8-br5m8" in "kube-system" namespace has status "Ready":"True"
	I0108 21:19:26.758400  196095 pod_ready.go:81] duration metric: took 7.004987819s waiting for pod "coredns-66bff467f8-br5m8" in "kube-system" namespace to be "Ready" ...
	I0108 21:19:26.758412  196095 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-177638" in "kube-system" namespace to be "Ready" ...
	I0108 21:19:26.762342  196095 pod_ready.go:92] pod "etcd-ingress-addon-legacy-177638" in "kube-system" namespace has status "Ready":"True"
	I0108 21:19:26.762364  196095 pod_ready.go:81] duration metric: took 3.944976ms waiting for pod "etcd-ingress-addon-legacy-177638" in "kube-system" namespace to be "Ready" ...
	I0108 21:19:26.762379  196095 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-177638" in "kube-system" namespace to be "Ready" ...
	I0108 21:19:26.766203  196095 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-177638" in "kube-system" namespace has status "Ready":"True"
	I0108 21:19:26.766228  196095 pod_ready.go:81] duration metric: took 3.841968ms waiting for pod "kube-apiserver-ingress-addon-legacy-177638" in "kube-system" namespace to be "Ready" ...
	I0108 21:19:26.766240  196095 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-177638" in "kube-system" namespace to be "Ready" ...
	I0108 21:19:26.769751  196095 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-177638" in "kube-system" namespace has status "Ready":"True"
	I0108 21:19:26.769772  196095 pod_ready.go:81] duration metric: took 3.523246ms waiting for pod "kube-controller-manager-ingress-addon-legacy-177638" in "kube-system" namespace to be "Ready" ...
	I0108 21:19:26.769784  196095 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-75288" in "kube-system" namespace to be "Ready" ...
	I0108 21:19:26.773386  196095 pod_ready.go:92] pod "kube-proxy-75288" in "kube-system" namespace has status "Ready":"True"
	I0108 21:19:26.773408  196095 pod_ready.go:81] duration metric: took 3.612784ms waiting for pod "kube-proxy-75288" in "kube-system" namespace to be "Ready" ...
	I0108 21:19:26.773419  196095 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-177638" in "kube-system" namespace to be "Ready" ...
	I0108 21:19:26.953694  196095 request.go:629] Waited for 180.194428ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-177638
	I0108 21:19:27.154598  196095 request.go:629] Waited for 198.220114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-177638
	I0108 21:19:27.157175  196095 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-177638" in "kube-system" namespace has status "Ready":"True"
	I0108 21:19:27.157202  196095 pod_ready.go:81] duration metric: took 383.770955ms waiting for pod "kube-scheduler-ingress-addon-legacy-177638" in "kube-system" namespace to be "Ready" ...
	I0108 21:19:27.157213  196095 pod_ready.go:38] duration metric: took 7.410246672s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:19:27.157230  196095 api_server.go:52] waiting for apiserver process to appear ...
	I0108 21:19:27.157289  196095 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:19:27.167693  196095 api_server.go:72] duration metric: took 12.551984447s to wait for apiserver process to appear ...
	I0108 21:19:27.167725  196095 api_server.go:88] waiting for apiserver healthz status ...
	I0108 21:19:27.167741  196095 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0108 21:19:27.172264  196095 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0108 21:19:27.172988  196095 api_server.go:141] control plane version: v1.18.20
	I0108 21:19:27.173009  196095 api_server.go:131] duration metric: took 5.27898ms to wait for apiserver health ...
	I0108 21:19:27.173018  196095 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:19:27.354413  196095 request.go:629] Waited for 181.313809ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0108 21:19:27.359708  196095 system_pods.go:59] 8 kube-system pods found
	I0108 21:19:27.359742  196095 system_pods.go:61] "coredns-66bff467f8-br5m8" [ac465dd7-43cd-4085-bc1d-2b765bdd6b44] Running
	I0108 21:19:27.359749  196095 system_pods.go:61] "etcd-ingress-addon-legacy-177638" [3cbc5528-477b-4519-8358-bafb28638e8f] Running
	I0108 21:19:27.359753  196095 system_pods.go:61] "kindnet-mxspd" [318e65d5-9659-40f6-9d8e-f2bc2bb5660a] Running
	I0108 21:19:27.359759  196095 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-177638" [40c71b7c-64dd-4621-8be7-1a8d8b8d0d92] Running
	I0108 21:19:27.359764  196095 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-177638" [f6e122cb-26e5-435f-9e58-f7aaf228391c] Running
	I0108 21:19:27.359767  196095 system_pods.go:61] "kube-proxy-75288" [7e3eebdf-ab4e-438f-9bbe-08416932e9d2] Running
	I0108 21:19:27.359771  196095 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-177638" [683aa67e-a8d2-4e83-9b8a-82218017ea26] Running
	I0108 21:19:27.359775  196095 system_pods.go:61] "storage-provisioner" [92a5faaf-701c-4e66-a966-efe16d2f7850] Running
	I0108 21:19:27.359789  196095 system_pods.go:74] duration metric: took 186.765276ms to wait for pod list to return data ...
	I0108 21:19:27.359799  196095 default_sa.go:34] waiting for default service account to be created ...
	I0108 21:19:27.554214  196095 request.go:629] Waited for 194.322036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0108 21:19:27.556457  196095 default_sa.go:45] found service account: "default"
	I0108 21:19:27.556482  196095 default_sa.go:55] duration metric: took 196.676137ms for default service account to be created ...
	I0108 21:19:27.556490  196095 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 21:19:27.754635  196095 request.go:629] Waited for 198.073992ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0108 21:19:27.760014  196095 system_pods.go:86] 8 kube-system pods found
	I0108 21:19:27.760036  196095 system_pods.go:89] "coredns-66bff467f8-br5m8" [ac465dd7-43cd-4085-bc1d-2b765bdd6b44] Running
	I0108 21:19:27.760042  196095 system_pods.go:89] "etcd-ingress-addon-legacy-177638" [3cbc5528-477b-4519-8358-bafb28638e8f] Running
	I0108 21:19:27.760048  196095 system_pods.go:89] "kindnet-mxspd" [318e65d5-9659-40f6-9d8e-f2bc2bb5660a] Running
	I0108 21:19:27.760052  196095 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-177638" [40c71b7c-64dd-4621-8be7-1a8d8b8d0d92] Running
	I0108 21:19:27.760057  196095 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-177638" [f6e122cb-26e5-435f-9e58-f7aaf228391c] Running
	I0108 21:19:27.760063  196095 system_pods.go:89] "kube-proxy-75288" [7e3eebdf-ab4e-438f-9bbe-08416932e9d2] Running
	I0108 21:19:27.760069  196095 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-177638" [683aa67e-a8d2-4e83-9b8a-82218017ea26] Running
	I0108 21:19:27.760079  196095 system_pods.go:89] "storage-provisioner" [92a5faaf-701c-4e66-a966-efe16d2f7850] Running
	I0108 21:19:27.760088  196095 system_pods.go:126] duration metric: took 203.591602ms to wait for k8s-apps to be running ...
	I0108 21:19:27.760101  196095 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 21:19:27.760144  196095 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:19:27.770829  196095 system_svc.go:56] duration metric: took 10.721217ms WaitForService to wait for kubelet.
	I0108 21:19:27.770857  196095 kubeadm.go:581] duration metric: took 13.155154151s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 21:19:27.770882  196095 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:19:27.954391  196095 request.go:629] Waited for 183.418314ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0108 21:19:27.956967  196095 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 21:19:27.957001  196095 node_conditions.go:123] node cpu capacity is 8
	I0108 21:19:27.957017  196095 node_conditions.go:105] duration metric: took 186.129682ms to run NodePressure ...
	I0108 21:19:27.957031  196095 start.go:228] waiting for startup goroutines ...
	I0108 21:19:27.957065  196095 start.go:233] waiting for cluster config update ...
	I0108 21:19:27.957083  196095 start.go:242] writing updated cluster config ...
	I0108 21:19:27.957379  196095 ssh_runner.go:195] Run: rm -f paused
	I0108 21:19:28.004040  196095 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I0108 21:19:28.006073  196095 out.go:177] 
	W0108 21:19:28.007530  196095 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0108 21:19:28.008840  196095 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0108 21:19:28.010153  196095 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-177638" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 08 21:22:11 ingress-addon-legacy-177638 crio[960]: time="2024-01-08 21:22:11.954898995Z" level=info msg="Created container 6bd293166d9704d5c6e10808a1e8b13cce02983c30be5ec4f7ed473897383576: default/hello-world-app-5f5d8b66bb-lfh4k/hello-world-app" id=8f5c6ca9-deae-43f6-ba8a-34075e90585b name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jan 08 21:22:11 ingress-addon-legacy-177638 crio[960]: time="2024-01-08 21:22:11.955465458Z" level=info msg="Starting container: 6bd293166d9704d5c6e10808a1e8b13cce02983c30be5ec4f7ed473897383576" id=4c23134a-13c9-4d56-9b46-2c9e9babe220 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jan 08 21:22:11 ingress-addon-legacy-177638 crio[960]: time="2024-01-08 21:22:11.963768405Z" level=info msg="Started container" PID=4841 containerID=6bd293166d9704d5c6e10808a1e8b13cce02983c30be5ec4f7ed473897383576 description=default/hello-world-app-5f5d8b66bb-lfh4k/hello-world-app id=4c23134a-13c9-4d56-9b46-2c9e9babe220 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=46bb15f27e99e0267378a31a44c25814880b66b77137572b65cc1aae386f41de
	Jan 08 21:22:13 ingress-addon-legacy-177638 crio[960]: time="2024-01-08 21:22:13.224681777Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=9b37a55c-7ad2-49f7-8cb9-ac2b93b593bf name=/runtime.v1alpha2.ImageService/ImageStatus
	Jan 08 21:22:26 ingress-addon-legacy-177638 crio[960]: time="2024-01-08 21:22:26.234313949Z" level=info msg="Stopping pod sandbox: bdc9b0c8814da4d4628ad83d50b1b4b19414cf3db18532660c1c2c03e01e3fdc" id=a853d4b8-e2e8-4bf0-aea7-c7045ad0e7b4 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 21:22:26 ingress-addon-legacy-177638 crio[960]: time="2024-01-08 21:22:26.235213547Z" level=info msg="Stopped pod sandbox: bdc9b0c8814da4d4628ad83d50b1b4b19414cf3db18532660c1c2c03e01e3fdc" id=a853d4b8-e2e8-4bf0-aea7-c7045ad0e7b4 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 21:22:27 ingress-addon-legacy-177638 crio[960]: time="2024-01-08 21:22:27.000280495Z" level=info msg="Stopping container: 339d554920f74a8442ee50cadd06e3491c64046153e1ad746f0a0d4421e51b46 (timeout: 2s)" id=cac2129d-36c7-4899-84e7-de340d26bd3c name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 08 21:22:27 ingress-addon-legacy-177638 crio[960]: time="2024-01-08 21:22:27.002938353Z" level=info msg="Stopping container: 339d554920f74a8442ee50cadd06e3491c64046153e1ad746f0a0d4421e51b46 (timeout: 2s)" id=2a03a204-af0e-438b-b157-f7ee66f3475f name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 08 21:22:27 ingress-addon-legacy-177638 crio[960]: time="2024-01-08 21:22:27.224346904Z" level=info msg="Stopping pod sandbox: bdc9b0c8814da4d4628ad83d50b1b4b19414cf3db18532660c1c2c03e01e3fdc" id=e470e59b-ea5c-42a6-9c29-8463dea5ba0d name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 21:22:27 ingress-addon-legacy-177638 crio[960]: time="2024-01-08 21:22:27.224411173Z" level=info msg="Stopped pod sandbox (already stopped): bdc9b0c8814da4d4628ad83d50b1b4b19414cf3db18532660c1c2c03e01e3fdc" id=e470e59b-ea5c-42a6-9c29-8463dea5ba0d name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 21:22:29 ingress-addon-legacy-177638 crio[960]: time="2024-01-08 21:22:29.008482982Z" level=warning msg="Stopping container 339d554920f74a8442ee50cadd06e3491c64046153e1ad746f0a0d4421e51b46 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=cac2129d-36c7-4899-84e7-de340d26bd3c name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 08 21:22:29 ingress-addon-legacy-177638 conmon[3382]: conmon 339d554920f74a8442ee <ninfo>: container 3394 exited with status 137
	Jan 08 21:22:29 ingress-addon-legacy-177638 crio[960]: time="2024-01-08 21:22:29.153967576Z" level=info msg="Stopped container 339d554920f74a8442ee50cadd06e3491c64046153e1ad746f0a0d4421e51b46: ingress-nginx/ingress-nginx-controller-7fcf777cb7-qzqlc/controller" id=2a03a204-af0e-438b-b157-f7ee66f3475f name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 08 21:22:29 ingress-addon-legacy-177638 crio[960]: time="2024-01-08 21:22:29.154027038Z" level=info msg="Stopped container 339d554920f74a8442ee50cadd06e3491c64046153e1ad746f0a0d4421e51b46: ingress-nginx/ingress-nginx-controller-7fcf777cb7-qzqlc/controller" id=cac2129d-36c7-4899-84e7-de340d26bd3c name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 08 21:22:29 ingress-addon-legacy-177638 crio[960]: time="2024-01-08 21:22:29.154566959Z" level=info msg="Stopping pod sandbox: fb568f06b2bb6b19348524f5a5fac0940274add16d1c961a04a3e15a6a9cbeaf" id=44bb647b-d449-421f-869e-75f97b8f375c name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 21:22:29 ingress-addon-legacy-177638 crio[960]: time="2024-01-08 21:22:29.154585859Z" level=info msg="Stopping pod sandbox: fb568f06b2bb6b19348524f5a5fac0940274add16d1c961a04a3e15a6a9cbeaf" id=14565f05-6e18-44a5-bd7e-df060597d50e name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 21:22:29 ingress-addon-legacy-177638 crio[960]: time="2024-01-08 21:22:29.157231958Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-W4KU7QESJ5D7O3NT - [0:0]\n:KUBE-HP-WMQ666APPGRYGATK - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-WMQ666APPGRYGATK\n-X KUBE-HP-W4KU7QESJ5D7O3NT\nCOMMIT\n"
	Jan 08 21:22:29 ingress-addon-legacy-177638 crio[960]: time="2024-01-08 21:22:29.158565356Z" level=info msg="Closing host port tcp:80"
	Jan 08 21:22:29 ingress-addon-legacy-177638 crio[960]: time="2024-01-08 21:22:29.158598931Z" level=info msg="Closing host port tcp:443"
	Jan 08 21:22:29 ingress-addon-legacy-177638 crio[960]: time="2024-01-08 21:22:29.159535650Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jan 08 21:22:29 ingress-addon-legacy-177638 crio[960]: time="2024-01-08 21:22:29.159555130Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jan 08 21:22:29 ingress-addon-legacy-177638 crio[960]: time="2024-01-08 21:22:29.159667355Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-qzqlc Namespace:ingress-nginx ID:fb568f06b2bb6b19348524f5a5fac0940274add16d1c961a04a3e15a6a9cbeaf UID:716e322e-e1a5-4991-ae0e-d0fdb38917ce NetNS:/var/run/netns/0e2e0813-fe1f-47be-a3b6-00924c8aadb8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 08 21:22:29 ingress-addon-legacy-177638 crio[960]: time="2024-01-08 21:22:29.159776866Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-qzqlc from CNI network \"kindnet\" (type=ptp)"
	Jan 08 21:22:29 ingress-addon-legacy-177638 crio[960]: time="2024-01-08 21:22:29.202929525Z" level=info msg="Stopped pod sandbox: fb568f06b2bb6b19348524f5a5fac0940274add16d1c961a04a3e15a6a9cbeaf" id=44bb647b-d449-421f-869e-75f97b8f375c name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 21:22:29 ingress-addon-legacy-177638 crio[960]: time="2024-01-08 21:22:29.203044071Z" level=info msg="Stopped pod sandbox (already stopped): fb568f06b2bb6b19348524f5a5fac0940274add16d1c961a04a3e15a6a9cbeaf" id=14565f05-6e18-44a5-bd7e-df060597d50e name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6bd293166d970       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            22 seconds ago      Running             hello-world-app           0                   46bb15f27e99e       hello-world-app-5f5d8b66bb-lfh4k
	cbf8563992e29       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                    2 minutes ago       Running             nginx                     0                   12685ad756b75       nginx
	339d554920f74       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   fb568f06b2bb6       ingress-nginx-controller-7fcf777cb7-qzqlc
	ac11882e1a2d8       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   bdbc4d18fdf95       ingress-nginx-admission-patch-gdf2p
	0df5e0305d52b       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   68fadb767393e       ingress-nginx-admission-create-7dd2c
	7c7f9eef4589c       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   76201ee20b40d       coredns-66bff467f8-br5m8
	bfbe4205c9ad9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   f0cc722786d82       storage-provisioner
	5110495d98197       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   5321205abd648       kindnet-mxspd
	31d936d272fef       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   99b5642dababa       kube-proxy-75288
	886da7a492f0a       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   c7d19320f89a3       kube-apiserver-ingress-addon-legacy-177638
	c4027273df6a2       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   417488bd6a73b       etcd-ingress-addon-legacy-177638
	29875deffedec       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   c6aa4f6279ea9       kube-controller-manager-ingress-addon-legacy-177638
	37b0e34eb6b15       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   a3d263ea6c010       kube-scheduler-ingress-addon-legacy-177638
	
	
	==> coredns [7c7f9eef4589cfa9e4ce672d16766dfdf8ec01a1f3d32a5841f48791abbf531e] <==
	[INFO] 10.244.0.5:35685 - 12660 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003872635s
	[INFO] 10.244.0.5:46593 - 36975 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00369808s
	[INFO] 10.244.0.5:50605 - 44680 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003887165s
	[INFO] 10.244.0.5:41764 - 40384 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003542557s
	[INFO] 10.244.0.5:43398 - 39533 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004019265s
	[INFO] 10.244.0.5:35685 - 12162 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003806506s
	[INFO] 10.244.0.5:48332 - 43211 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004149047s
	[INFO] 10.244.0.5:33543 - 10385 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004269052s
	[INFO] 10.244.0.5:34315 - 6610 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00414651s
	[INFO] 10.244.0.5:46593 - 54658 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003676743s
	[INFO] 10.244.0.5:48332 - 62621 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003688226s
	[INFO] 10.244.0.5:41764 - 7335 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003782004s
	[INFO] 10.244.0.5:34315 - 21482 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003453464s
	[INFO] 10.244.0.5:48332 - 36278 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000062857s
	[INFO] 10.244.0.5:35685 - 17991 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003544856s
	[INFO] 10.244.0.5:33543 - 38048 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00391759s
	[INFO] 10.244.0.5:46593 - 33021 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000158913s
	[INFO] 10.244.0.5:33543 - 1124 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000081663s
	[INFO] 10.244.0.5:43398 - 19191 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004207166s
	[INFO] 10.244.0.5:41764 - 46338 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000385333s
	[INFO] 10.244.0.5:34315 - 6361 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000356945s
	[INFO] 10.244.0.5:35685 - 59286 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000277561s
	[INFO] 10.244.0.5:43398 - 19966 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000050389s
	[INFO] 10.244.0.5:50605 - 37073 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004519713s
	[INFO] 10.244.0.5:50605 - 40611 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000050313s
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-177638
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-177638
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=ingress-addon-legacy-177638
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T21_18_59_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:18:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-177638
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 21:22:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:22:29 +0000   Mon, 08 Jan 2024 21:18:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:22:29 +0000   Mon, 08 Jan 2024 21:18:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:22:29 +0000   Mon, 08 Jan 2024 21:18:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:22:29 +0000   Mon, 08 Jan 2024 21:19:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-177638
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d63832ee8284411baef0786cb6a04a0
	  System UUID:                aa79cadd-06b8-4e53-9c72-b91417aa3989
	  Boot ID:                    b9c55cc6-3d64-43dc-b6f4-c38d0ea8cf14
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-lfh4k                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 coredns-66bff467f8-br5m8                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m21s
	  kube-system                 etcd-ingress-addon-legacy-177638                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kindnet-mxspd                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m20s
	  kube-system                 kube-apiserver-ingress-addon-legacy-177638             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-177638    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kube-proxy-75288                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	  kube-system                 kube-scheduler-ingress-addon-legacy-177638             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m43s (x5 over 3m43s)  kubelet     Node ingress-addon-legacy-177638 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m43s (x4 over 3m43s)  kubelet     Node ingress-addon-legacy-177638 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m43s (x4 over 3m43s)  kubelet     Node ingress-addon-legacy-177638 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m35s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m35s                  kubelet     Node ingress-addon-legacy-177638 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m35s                  kubelet     Node ingress-addon-legacy-177638 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m35s                  kubelet     Node ingress-addon-legacy-177638 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m20s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m15s                  kubelet     Node ingress-addon-legacy-177638 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.004912] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006564] FS-Cache: N-cookie d=00000000c3b3813c{9p.inode} n=0000000013361d1e
	[  +0.007373] FS-Cache: N-key=[8] 'eaa40f0200000000'
	[  +0.298020] FS-Cache: Duplicate cookie detected
	[  +0.004718] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006750] FS-Cache: O-cookie d=00000000c3b3813c{9p.inode} n=00000000cef9bdc0
	[  +0.007344] FS-Cache: O-key=[8] 'f6a40f0200000000'
	[  +0.004918] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006650] FS-Cache: N-cookie d=00000000c3b3813c{9p.inode} n=000000000a10e56e
	[  +0.008810] FS-Cache: N-key=[8] 'f6a40f0200000000'
	[ +16.807296] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jan 8 21:19] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c6 b1 d2 c8 ab 41 b6 f8 b3 70 ff ae 08 00
	[  +1.020088] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c6 b1 d2 c8 ab 41 b6 f8 b3 70 ff ae 08 00
	[Jan 8 21:20] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: c6 b1 d2 c8 ab 41 b6 f8 b3 70 ff ae 08 00
	[  +4.191572] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c6 b1 d2 c8 ab 41 b6 f8 b3 70 ff ae 08 00
	[  +8.195208] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c6 b1 d2 c8 ab 41 b6 f8 b3 70 ff ae 08 00
	[ +16.122464] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: c6 b1 d2 c8 ab 41 b6 f8 b3 70 ff ae 08 00
	[Jan 8 21:21] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c6 b1 d2 c8 ab 41 b6 f8 b3 70 ff ae 08 00
	
	
	==> etcd [c4027273df6a2e4ebf57a9e9081acf57253607316af917c8bca09b8254315960] <==
	raft2024/01/08 21:18:52 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-08 21:18:52.314458 W | auth: simple token is not cryptographically signed
	2024-01-08 21:18:52.318165 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-01-08 21:18:52.318577 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/01/08 21:18:52 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-08 21:18:52.319132 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2024-01-08 21:18:52.321299 I | embed: listening for peers on 192.168.49.2:2380
	2024-01-08 21:18:52.321480 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-08 21:18:52.321959 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/01/08 21:18:53 INFO: aec36adc501070cc is starting a new election at term 1
	raft2024/01/08 21:18:53 INFO: aec36adc501070cc became candidate at term 2
	raft2024/01/08 21:18:53 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2024/01/08 21:18:53 INFO: aec36adc501070cc became leader at term 2
	raft2024/01/08 21:18:53 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2024-01-08 21:18:53.052598 I | etcdserver: published {Name:ingress-addon-legacy-177638 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2024-01-08 21:18:53.052624 I | embed: ready to serve client requests
	2024-01-08 21:18:53.052642 I | embed: ready to serve client requests
	2024-01-08 21:18:53.052657 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-08 21:18:53.053340 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-08 21:18:53.053409 I | etcdserver/api: enabled capabilities for version 3.4
	2024-01-08 21:18:53.054138 I | embed: serving client requests on 192.168.49.2:2379
	2024-01-08 21:18:53.054217 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-08 21:19:20.386719 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/storage-provisioner\" " with result "range_response_count:1 size:3718" took too long (128.031999ms) to execute
	2024-01-08 21:19:20.386773 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1107" took too long (127.590425ms) to execute
	2024-01-08 21:20:08.660671 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:2 size:8316" took too long (114.628614ms) to execute
	
	
	==> kernel <==
	 21:22:34 up  4:05,  0 users,  load average: 0.44, 0.86, 1.51
	Linux ingress-addon-legacy-177638 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [5110495d98197ad19ad70824ef256d2ab04a0fcce82e9abd52a47a6755ccea01] <==
	I0108 21:20:27.988570       1 main.go:227] handling current node
	I0108 21:20:37.991649       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 21:20:37.991675       1 main.go:227] handling current node
	I0108 21:20:48.002529       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 21:20:48.002564       1 main.go:227] handling current node
	I0108 21:20:58.006593       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 21:20:58.006619       1 main.go:227] handling current node
	I0108 21:21:08.018868       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 21:21:08.018894       1 main.go:227] handling current node
	I0108 21:21:18.022854       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 21:21:18.022879       1 main.go:227] handling current node
	I0108 21:21:28.035416       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 21:21:28.035443       1 main.go:227] handling current node
	I0108 21:21:38.039293       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 21:21:38.039319       1 main.go:227] handling current node
	I0108 21:21:48.051707       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 21:21:48.051739       1 main.go:227] handling current node
	I0108 21:21:58.055304       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 21:21:58.055328       1 main.go:227] handling current node
	I0108 21:22:08.067475       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 21:22:08.067500       1 main.go:227] handling current node
	I0108 21:22:18.071713       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 21:22:18.071866       1 main.go:227] handling current node
	I0108 21:22:28.080656       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 21:22:28.080681       1 main.go:227] handling current node
	
	
	==> kube-apiserver [886da7a492f0a3dd25699202c5d72e61ad0e0aca65f952702901f85eb750f7d2] <==
	I0108 21:18:56.213670       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0108 21:18:56.213670       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 21:18:56.213670       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0108 21:18:56.213694       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 21:18:56.213706       1 cache.go:39] Caches are synced for autoregister controller
	I0108 21:18:57.095019       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0108 21:18:57.095043       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0108 21:18:57.099541       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0108 21:18:57.102274       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0108 21:18:57.102294       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0108 21:18:57.362826       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 21:18:57.390063       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0108 21:18:57.442240       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0108 21:18:57.443134       1 controller.go:609] quota admission added evaluator for: endpoints
	I0108 21:18:57.445797       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 21:18:58.384107       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0108 21:18:58.815830       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0108 21:18:58.924893       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0108 21:18:59.167472       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 21:19:13.829586       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0108 21:19:14.418643       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0108 21:19:28.673141       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0108 21:19:50.564601       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0108 21:22:26.821320       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	E0108 21:22:27.009454       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	
	==> kube-controller-manager [29875deffedec00c2a3b0c5b00f84d326d999583c992a619fb0b761d37ce8047] <==
	I0108 21:19:14.106493       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"76577d87-c2ad-40b3-83a0-203f883ec841", APIVersion:"apps/v1", ResourceVersion:"339", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0108 21:19:14.119310       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"e742ca95-a2ad-452d-b6fa-e71e6cab005b", APIVersion:"apps/v1", ResourceVersion:"340", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-tppr4
	I0108 21:19:14.137630       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
	E0108 21:19:14.148120       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I0108 21:19:14.258072       1 shared_informer.go:230] Caches are synced for attach detach 
	I0108 21:19:14.343958       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0108 21:19:14.353402       1 shared_informer.go:230] Caches are synced for stateful set 
	I0108 21:19:14.413641       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I0108 21:19:14.413662       1 shared_informer.go:230] Caches are synced for daemon sets 
	I0108 21:19:14.413773       1 shared_informer.go:230] Caches are synced for resource quota 
	I0108 21:19:14.425199       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"e8bedf17-5a9b-45e0-ad3a-9ce0984895fd", APIVersion:"apps/v1", ResourceVersion:"216", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-75288
	I0108 21:19:14.432408       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"d8f5de1f-8cc6-4221-8f5d-323ce6b7e0e7", APIVersion:"apps/v1", ResourceVersion:"235", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-mxspd
	I0108 21:19:14.437265       1 shared_informer.go:230] Caches are synced for endpoint 
	I0108 21:19:14.438207       1 shared_informer.go:230] Caches are synced for resource quota 
	I0108 21:19:14.440284       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0108 21:19:14.440312       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0108 21:19:23.788009       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0108 21:19:28.665093       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"e27970f5-2e2a-4eef-897b-1c946c4abdd4", APIVersion:"apps/v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0108 21:19:28.669987       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"607ebf95-66cb-4583-9920-0d895d4ca300", APIVersion:"apps/v1", ResourceVersion:"468", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-qzqlc
	I0108 21:19:28.721124       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"4db9ed2c-9405-424c-b696-d29dca217831", APIVersion:"batch/v1", ResourceVersion:"475", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-7dd2c
	I0108 21:19:28.735631       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"6c4bc7db-9cdd-4743-9672-3131fe1e4392", APIVersion:"batch/v1", ResourceVersion:"481", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-gdf2p
	I0108 21:19:31.285029       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"4db9ed2c-9405-424c-b696-d29dca217831", APIVersion:"batch/v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0108 21:19:32.286977       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"6c4bc7db-9cdd-4743-9672-3131fe1e4392", APIVersion:"batch/v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0108 21:22:10.050652       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"3ed2f5aa-24a4-4671-ac3a-fb30716c3ef4", APIVersion:"apps/v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0108 21:22:10.056226       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"6e126ace-efa7-4498-9188-0e66d6153665", APIVersion:"apps/v1", ResourceVersion:"705", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-lfh4k
	
	
	==> kube-proxy [31d936d272fef416d4e034222b2b9be6979ac3ed3fc5a2c5e444a2b098253925] <==
	W0108 21:19:14.910426       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0108 21:19:14.916451       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0108 21:19:14.916476       1 server_others.go:186] Using iptables Proxier.
	I0108 21:19:14.916706       1 server.go:583] Version: v1.18.20
	I0108 21:19:14.917047       1 config.go:133] Starting endpoints config controller
	I0108 21:19:14.917124       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0108 21:19:14.917136       1 config.go:315] Starting service config controller
	I0108 21:19:14.917152       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0108 21:19:15.017321       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0108 21:19:15.017326       1 shared_informer.go:230] Caches are synced for service config 
	
	
	==> kube-scheduler [37b0e34eb6b15ba64a7774cba42459602afb1f3301c425eef429528dea65f6f9] <==
	W0108 21:18:56.129975       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0108 21:18:56.129980       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0108 21:18:56.140386       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0108 21:18:56.140411       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0108 21:18:56.142074       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 21:18:56.142161       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 21:18:56.143016       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0108 21:18:56.143090       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0108 21:18:56.144066       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 21:18:56.216770       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 21:18:56.217275       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:18:56.217469       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:18:56.217529       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:18:56.217627       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 21:18:56.217731       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:18:56.217739       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 21:18:56.217843       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:18:56.217285       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:18:56.217951       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:18:56.217984       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:18:57.043844       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:18:57.088859       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 21:18:57.239497       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:18:57.258864       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0108 21:18:58.742367       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Jan 08 21:21:47 ingress-addon-legacy-177638 kubelet[1852]: E0108 21:21:47.225404    1852 pod_workers.go:191] Error syncing pod 9b3948dd-a364-4c9c-9a8a-e6d3f4d3989e ("kube-ingress-dns-minikube_kube-system(9b3948dd-a364-4c9c-9a8a-e6d3f4d3989e)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jan 08 21:21:58 ingress-addon-legacy-177638 kubelet[1852]: E0108 21:21:58.225126    1852 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 08 21:21:58 ingress-addon-legacy-177638 kubelet[1852]: E0108 21:21:58.225172    1852 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 08 21:21:58 ingress-addon-legacy-177638 kubelet[1852]: E0108 21:21:58.225219    1852 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 08 21:21:58 ingress-addon-legacy-177638 kubelet[1852]: E0108 21:21:58.225246    1852 pod_workers.go:191] Error syncing pod 9b3948dd-a364-4c9c-9a8a-e6d3f4d3989e ("kube-ingress-dns-minikube_kube-system(9b3948dd-a364-4c9c-9a8a-e6d3f4d3989e)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jan 08 21:22:10 ingress-addon-legacy-177638 kubelet[1852]: I0108 21:22:10.061670    1852 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 08 21:22:10 ingress-addon-legacy-177638 kubelet[1852]: I0108 21:22:10.229775    1852 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-crhsq" (UniqueName: "kubernetes.io/secret/245f2eee-d5d3-4df2-bf96-0f7dc0ef9081-default-token-crhsq") pod "hello-world-app-5f5d8b66bb-lfh4k" (UID: "245f2eee-d5d3-4df2-bf96-0f7dc0ef9081")
	Jan 08 21:22:10 ingress-addon-legacy-177638 kubelet[1852]: W0108 21:22:10.410814    1852 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/7824767e4d6cef979d9fec195215476d64fe07fda28a5ea37ee33bbe6ea403b9/crio-46bb15f27e99e0267378a31a44c25814880b66b77137572b65cc1aae386f41de WatchSource:0}: Error finding container 46bb15f27e99e0267378a31a44c25814880b66b77137572b65cc1aae386f41de: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc000bd8fa0 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x750800) %!!(MISSING)s(func() error=0x750790)}
	Jan 08 21:22:13 ingress-addon-legacy-177638 kubelet[1852]: E0108 21:22:13.225036    1852 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 08 21:22:13 ingress-addon-legacy-177638 kubelet[1852]: E0108 21:22:13.225080    1852 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 08 21:22:13 ingress-addon-legacy-177638 kubelet[1852]: E0108 21:22:13.225140    1852 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 08 21:22:13 ingress-addon-legacy-177638 kubelet[1852]: E0108 21:22:13.225179    1852 pod_workers.go:191] Error syncing pod 9b3948dd-a364-4c9c-9a8a-e6d3f4d3989e ("kube-ingress-dns-minikube_kube-system(9b3948dd-a364-4c9c-9a8a-e6d3f4d3989e)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jan 08 21:22:25 ingress-addon-legacy-177638 kubelet[1852]: I0108 21:22:25.865530    1852 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-ctvgp" (UniqueName: "kubernetes.io/secret/9b3948dd-a364-4c9c-9a8a-e6d3f4d3989e-minikube-ingress-dns-token-ctvgp") pod "9b3948dd-a364-4c9c-9a8a-e6d3f4d3989e" (UID: "9b3948dd-a364-4c9c-9a8a-e6d3f4d3989e")
	Jan 08 21:22:25 ingress-addon-legacy-177638 kubelet[1852]: I0108 21:22:25.867678    1852 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b3948dd-a364-4c9c-9a8a-e6d3f4d3989e-minikube-ingress-dns-token-ctvgp" (OuterVolumeSpecName: "minikube-ingress-dns-token-ctvgp") pod "9b3948dd-a364-4c9c-9a8a-e6d3f4d3989e" (UID: "9b3948dd-a364-4c9c-9a8a-e6d3f4d3989e"). InnerVolumeSpecName "minikube-ingress-dns-token-ctvgp". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 21:22:25 ingress-addon-legacy-177638 kubelet[1852]: I0108 21:22:25.965866    1852 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-ctvgp" (UniqueName: "kubernetes.io/secret/9b3948dd-a364-4c9c-9a8a-e6d3f4d3989e-minikube-ingress-dns-token-ctvgp") on node "ingress-addon-legacy-177638" DevicePath ""
	Jan 08 21:22:26 ingress-addon-legacy-177638 kubelet[1852]: W0108 21:22:26.638570    1852 pod_container_deletor.go:77] Container "bdc9b0c8814da4d4628ad83d50b1b4b19414cf3db18532660c1c2c03e01e3fdc" not found in pod's containers
	Jan 08 21:22:27 ingress-addon-legacy-177638 kubelet[1852]: E0108 21:22:27.001760    1852 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-qzqlc.17a87c229925ab8d", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-qzqlc", UID:"716e322e-e1a5-4991-ae0e-d0fdb38917ce", APIVersion:"v1", ResourceVersion:"472", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-177638"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15f37c4bb98778d, ext:208216107111, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15f37c4bb98778d, ext:208216107111, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-qzqlc.17a87c229925ab8d" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 08 21:22:27 ingress-addon-legacy-177638 kubelet[1852]: E0108 21:22:27.005610    1852 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-qzqlc.17a87c229925ab8d", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-qzqlc", UID:"716e322e-e1a5-4991-ae0e-d0fdb38917ce", APIVersion:"v1", ResourceVersion:"472", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-177638"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15f37c4bb98778d, ext:208216107111, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15f37c4c02821de, ext:208218889412, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-qzqlc.17a87c229925ab8d" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 08 21:22:29 ingress-addon-legacy-177638 kubelet[1852]: W0108 21:22:29.644330    1852 pod_container_deletor.go:77] Container "fb568f06b2bb6b19348524f5a5fac0940274add16d1c961a04a3e15a6a9cbeaf" not found in pod's containers
	Jan 08 21:22:29 ingress-addon-legacy-177638 kubelet[1852]: I0108 21:22:29.920795    1852 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/716e322e-e1a5-4991-ae0e-d0fdb38917ce-webhook-cert") pod "716e322e-e1a5-4991-ae0e-d0fdb38917ce" (UID: "716e322e-e1a5-4991-ae0e-d0fdb38917ce")
	Jan 08 21:22:29 ingress-addon-legacy-177638 kubelet[1852]: I0108 21:22:29.920843    1852 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-44nqs" (UniqueName: "kubernetes.io/secret/716e322e-e1a5-4991-ae0e-d0fdb38917ce-ingress-nginx-token-44nqs") pod "716e322e-e1a5-4991-ae0e-d0fdb38917ce" (UID: "716e322e-e1a5-4991-ae0e-d0fdb38917ce")
	Jan 08 21:22:29 ingress-addon-legacy-177638 kubelet[1852]: I0108 21:22:29.922777    1852 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/716e322e-e1a5-4991-ae0e-d0fdb38917ce-ingress-nginx-token-44nqs" (OuterVolumeSpecName: "ingress-nginx-token-44nqs") pod "716e322e-e1a5-4991-ae0e-d0fdb38917ce" (UID: "716e322e-e1a5-4991-ae0e-d0fdb38917ce"). InnerVolumeSpecName "ingress-nginx-token-44nqs". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 21:22:29 ingress-addon-legacy-177638 kubelet[1852]: I0108 21:22:29.922876    1852 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/716e322e-e1a5-4991-ae0e-d0fdb38917ce-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "716e322e-e1a5-4991-ae0e-d0fdb38917ce" (UID: "716e322e-e1a5-4991-ae0e-d0fdb38917ce"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 21:22:30 ingress-addon-legacy-177638 kubelet[1852]: I0108 21:22:30.021134    1852 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/716e322e-e1a5-4991-ae0e-d0fdb38917ce-webhook-cert") on node "ingress-addon-legacy-177638" DevicePath ""
	Jan 08 21:22:30 ingress-addon-legacy-177638 kubelet[1852]: I0108 21:22:30.021172    1852 reconciler.go:319] Volume detached for volume "ingress-nginx-token-44nqs" (UniqueName: "kubernetes.io/secret/716e322e-e1a5-4991-ae0e-d0fdb38917ce-ingress-nginx-token-44nqs") on node "ingress-addon-legacy-177638" DevicePath ""
	
	
	==> storage-provisioner [bfbe4205c9ad9aa4e653ac7c2d23a20ac9d89121b220c24973931e498a2648ec] <==
	I0108 21:19:20.135087       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 21:19:20.142090       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 21:19:20.142127       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 21:19:20.257928       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 21:19:20.258200       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-177638_605950d7-87b1-4c8a-bf83-125c90d497b7!
	I0108 21:19:20.258603       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d9878a25-71ef-4e5f-b5ab-f767e14bd3a1", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-177638_605950d7-87b1-4c8a-bf83-125c90d497b7 became leader
	I0108 21:19:20.359047       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-177638_605950d7-87b1-4c8a-bf83-125c90d497b7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-177638 -n ingress-addon-legacy-177638
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-177638 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (176.17s)
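Editor's note: the repeated kubelet errors in the log above ("short-name ... did not resolve to an alias and no unqualified-search registries are defined in /etc/containers/registries.conf") come from CRI-O refusing to expand the unqualified image name cryptexlabs/minikube-ingress-dns. A minimal sketch of two possible remedies follows, assuming docker.io is the intended registry; the registry choice and the ssh invocation are assumptions for illustration, not taken from this report:

	# Option 1 (assumed fix): reference the image by its fully qualified name
	# in the addon manifest, so no search-registry lookup is needed:
	#   docker.io/cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab

	# Option 2 (assumed fix): define a search registry inside the node and
	# restart CRI-O. Appending like this is only valid TOML if no [[registry]]
	# tables already follow in the file.
	out/minikube-linux-amd64 -p ingress-addon-legacy-177638 ssh "echo 'unqualified-search-registries = [\"docker.io\"]' | sudo tee -a /etc/containers/registries.conf && sudo systemctl restart crio"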

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-379549 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-379549 -- exec busybox-5bc68d56bd-dmq2z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-379549 -- exec busybox-5bc68d56bd-dmq2z -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-379549 -- exec busybox-5bc68d56bd-dmq2z -- sh -c "ping -c 1 192.168.58.1": exit status 1 (179.318067ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-dmq2z): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-379549 -- exec busybox-5bc68d56bd-hncds -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-379549 -- exec busybox-5bc68d56bd-hncds -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-379549 -- exec busybox-5bc68d56bd-hncds -- sh -c "ping -c 1 192.168.58.1": exit status 1 (179.403632ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-hncds): exit status 1
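Editor's note: "ping: permission denied (are you root?)" from both pods usually means the busybox container can neither open a raw ICMP socket (CAP_NET_RAW is not in the runtime's effective capability set) nor fall back to an unprivileged ICMP datagram socket (the container's groups fall outside net.ipv4.ping_group_range in the pod's network namespace). A minimal workaround sketch follows; the pod name, image tag, and override payload are illustrative assumptions, not taken from the test's manifests:

	# Recreate a throwaway busybox pod with CAP_NET_RAW explicitly added, then
	# retry the ping. Since net.ipv4.ping_group_range is a Kubernetes "safe"
	# sysctl, a securityContext.sysctls entry of "0 2147483647" would be an
	# alternative to adding the capability.
	kubectl --context multinode-379549 run ping-debug \
	  --image=docker.io/library/busybox:1.36 --restart=Never \
	  --overrides='{"spec":{"containers":[{"name":"ping-debug","image":"docker.io/library/busybox:1.36","command":["sleep","3600"],"securityContext":{"capabilities":{"add":["NET_RAW"]}}}]}}'
	kubectl --context multinode-379549 exec ping-debug -- ping -c 1 192.168.58.1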
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-379549
helpers_test.go:235: (dbg) docker inspect multinode-379549:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6363bf6a0fa165f3dc81661834e1aa6385238760cfcba75c8c1a781a69e042ac",
	        "Created": "2024-01-08T21:27:36.886579957Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 241366,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T21:27:37.159183835Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:127d4e2273d98a7f5001d818ad9d78fbfe93f6fb3b59e0136dea97a2dd09d9f5",
	        "ResolvConfPath": "/var/lib/docker/containers/6363bf6a0fa165f3dc81661834e1aa6385238760cfcba75c8c1a781a69e042ac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6363bf6a0fa165f3dc81661834e1aa6385238760cfcba75c8c1a781a69e042ac/hostname",
	        "HostsPath": "/var/lib/docker/containers/6363bf6a0fa165f3dc81661834e1aa6385238760cfcba75c8c1a781a69e042ac/hosts",
	        "LogPath": "/var/lib/docker/containers/6363bf6a0fa165f3dc81661834e1aa6385238760cfcba75c8c1a781a69e042ac/6363bf6a0fa165f3dc81661834e1aa6385238760cfcba75c8c1a781a69e042ac-json.log",
	        "Name": "/multinode-379549",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-379549:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-379549",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fe3ee08eecae99c9b6d9a702b7c853985f13a681bd0931bbccad1082bc3b6b83-init/diff:/var/lib/docker/overlay2/36c91ea73c875a756d19f8a4637b501585f27b26abca7b178ac0d11596ac7a7f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fe3ee08eecae99c9b6d9a702b7c853985f13a681bd0931bbccad1082bc3b6b83/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fe3ee08eecae99c9b6d9a702b7c853985f13a681bd0931bbccad1082bc3b6b83/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fe3ee08eecae99c9b6d9a702b7c853985f13a681bd0931bbccad1082bc3b6b83/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-379549",
	                "Source": "/var/lib/docker/volumes/multinode-379549/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-379549",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-379549",
	                "name.minikube.sigs.k8s.io": "multinode-379549",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0e4357dcd2dbd7cf59c3234304c9b3709ea792621553dc420228ce8ebabaea7d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32844"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0e4357dcd2db",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-379549": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6363bf6a0fa1",
	                        "multinode-379549"
	                    ],
	                    "NetworkID": "95d41916538481c54d14dc252289cb824751212abfb3539444580157d53b022b",
	                    "EndpointID": "082b68dab1b525f0ea3c9e267672152ea1c2674246544fdeb0fcf87a363e2c00",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-379549 -n multinode-379549
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-379549 logs -n 25: (1.189373364s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-991082                           | mount-start-2-991082 | jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-991082 ssh -- ls                    | mount-start-2-991082 | jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-971473                           | mount-start-1-971473 | jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-991082 ssh -- ls                    | mount-start-2-991082 | jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-991082                           | mount-start-2-991082 | jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	| start   | -p mount-start-2-991082                           | mount-start-2-991082 | jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	| ssh     | mount-start-2-991082 ssh -- ls                    | mount-start-2-991082 | jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-991082                           | mount-start-2-991082 | jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	| delete  | -p mount-start-1-971473                           | mount-start-1-971473 | jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	| start   | -p multinode-379549                               | multinode-379549     | jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:28 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-379549 -- apply -f                   | multinode-379549     | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:28 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-379549 -- rollout                    | multinode-379549     | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:29 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-379549 -- get pods -o                | multinode-379549     | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC | 08 Jan 24 21:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-379549 -- get pods -o                | multinode-379549     | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC | 08 Jan 24 21:29 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-379549 -- exec                       | multinode-379549     | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC | 08 Jan 24 21:29 UTC |
	|         | busybox-5bc68d56bd-dmq2z --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-379549 -- exec                       | multinode-379549     | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC | 08 Jan 24 21:29 UTC |
	|         | busybox-5bc68d56bd-hncds --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-379549 -- exec                       | multinode-379549     | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC | 08 Jan 24 21:29 UTC |
	|         | busybox-5bc68d56bd-dmq2z --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-379549 -- exec                       | multinode-379549     | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC | 08 Jan 24 21:29 UTC |
	|         | busybox-5bc68d56bd-hncds --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-379549 -- exec                       | multinode-379549     | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC | 08 Jan 24 21:29 UTC |
	|         | busybox-5bc68d56bd-dmq2z -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-379549 -- exec                       | multinode-379549     | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC | 08 Jan 24 21:29 UTC |
	|         | busybox-5bc68d56bd-hncds -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-379549 -- get pods -o                | multinode-379549     | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC | 08 Jan 24 21:29 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-379549 -- exec                       | multinode-379549     | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC | 08 Jan 24 21:29 UTC |
	|         | busybox-5bc68d56bd-dmq2z                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-379549 -- exec                       | multinode-379549     | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC |                     |
	|         | busybox-5bc68d56bd-dmq2z -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-379549 -- exec                       | multinode-379549     | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC | 08 Jan 24 21:29 UTC |
	|         | busybox-5bc68d56bd-hncds                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-379549 -- exec                       | multinode-379549     | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC |                     |
	|         | busybox-5bc68d56bd-hncds -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
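	
	Note: the two exec entries above with an empty end-time column are the "ping -c 1 192.168.58.1" probes that fail in TestMultiNode/serial/PingHostFrom2Pods. A minimal hand re-run of that probe (a sketch; the busybox pod names carry a per-run ReplicaSet hash, so list them first rather than reusing the names shown):
	
	    # list the busybox pods the DNS/ping checks target
	    kubectl --context multinode-379549 get pods -o jsonpath='{.items[*].metadata.name}'
	    # ping the docker network gateway (the host side of the cluster network) from inside one pod
	    kubectl --context multinode-379549 exec busybox-5bc68d56bd-dmq2z -- sh -c "ping -c 1 192.168.58.1"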
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 21:27:30
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:27:30.976488  240774 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:27:30.976758  240774 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:27:30.976769  240774 out.go:309] Setting ErrFile to fd 2...
	I0108 21:27:30.976778  240774 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:27:30.976997  240774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-150013/.minikube/bin
	I0108 21:27:30.977639  240774 out.go:303] Setting JSON to false
	I0108 21:27:30.979067  240774 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":15003,"bootTime":1704734248,"procs":750,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:27:30.979131  240774 start.go:138] virtualization: kvm guest
	I0108 21:27:30.981417  240774 out.go:177] * [multinode-379549] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:27:30.983023  240774 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 21:27:30.983093  240774 notify.go:220] Checking for updates...
	I0108 21:27:30.984596  240774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:27:30.986143  240774 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-150013/kubeconfig
	I0108 21:27:30.987471  240774 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-150013/.minikube
	I0108 21:27:30.988806  240774 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:27:30.990125  240774 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:27:30.991724  240774 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:27:31.013967  240774 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 21:27:31.014112  240774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:27:31.063937  240774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-08 21:27:31.055839184 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:27:31.064040  240774 docker.go:295] overlay module found
	I0108 21:27:31.066003  240774 out.go:177] * Using the docker driver based on user configuration
	I0108 21:27:31.067276  240774 start.go:298] selected driver: docker
	I0108 21:27:31.067284  240774 start.go:902] validating driver "docker" against <nil>
	I0108 21:27:31.067294  240774 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:27:31.068031  240774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:27:31.117804  240774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-08 21:27:31.109821473 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:27:31.117963  240774 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0108 21:27:31.118157  240774 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:27:31.120111  240774 out.go:177] * Using Docker driver with root privileges
	I0108 21:27:31.121513  240774 cni.go:84] Creating CNI manager for ""
	I0108 21:27:31.121529  240774 cni.go:136] 0 nodes found, recommending kindnet
	I0108 21:27:31.121539  240774 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 21:27:31.121551  240774 start_flags.go:321] config:
	{Name:multinode-379549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-379549 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:27:31.123059  240774 out.go:177] * Starting control plane node multinode-379549 in cluster multinode-379549
	I0108 21:27:31.124417  240774 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 21:27:31.125714  240774 out.go:177] * Pulling base image v0.0.42-1703790982-17866 ...
	I0108 21:27:31.126905  240774 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:27:31.126929  240774 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local docker daemon
	I0108 21:27:31.126943  240774 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17866-150013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 21:27:31.126952  240774 cache.go:56] Caching tarball of preloaded images
	I0108 21:27:31.127045  240774 preload.go:174] Found /home/jenkins/minikube-integration/17866-150013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 21:27:31.127060  240774 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 21:27:31.127438  240774 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/config.json ...
	I0108 21:27:31.127466  240774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/config.json: {Name:mke14ecea01ebb0485075ac2101f9ae614caf23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:27:31.142043  240774 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local docker daemon, skipping pull
	I0108 21:27:31.142065  240774 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d exists in daemon, skipping load
	I0108 21:27:31.142084  240774 cache.go:194] Successfully downloaded all kic artifacts
	I0108 21:27:31.142118  240774 start.go:365] acquiring machines lock for multinode-379549: {Name:mka654ba0c3e10df4abf5972d1e5abf50fb3c267 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:27:31.142218  240774 start.go:369] acquired machines lock for "multinode-379549" in 75.348µs
	I0108 21:27:31.142239  240774 start.go:93] Provisioning new machine with config: &{Name:multinode-379549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-379549 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 21:27:31.142337  240774 start.go:125] createHost starting for "" (driver="docker")
	I0108 21:27:31.144398  240774 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0108 21:27:31.144630  240774 start.go:159] libmachine.API.Create for "multinode-379549" (driver="docker")
	I0108 21:27:31.144659  240774 client.go:168] LocalClient.Create starting
	I0108 21:27:31.144715  240774 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem
	I0108 21:27:31.144746  240774 main.go:141] libmachine: Decoding PEM data...
	I0108 21:27:31.144762  240774 main.go:141] libmachine: Parsing certificate...
	I0108 21:27:31.144810  240774 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-150013/.minikube/certs/cert.pem
	I0108 21:27:31.144833  240774 main.go:141] libmachine: Decoding PEM data...
	I0108 21:27:31.144852  240774 main.go:141] libmachine: Parsing certificate...
	I0108 21:27:31.145165  240774 cli_runner.go:164] Run: docker network inspect multinode-379549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 21:27:31.160668  240774 cli_runner.go:211] docker network inspect multinode-379549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 21:27:31.160729  240774 network_create.go:281] running [docker network inspect multinode-379549] to gather additional debugging logs...
	I0108 21:27:31.160748  240774 cli_runner.go:164] Run: docker network inspect multinode-379549
	W0108 21:27:31.175286  240774 cli_runner.go:211] docker network inspect multinode-379549 returned with exit code 1
	I0108 21:27:31.175314  240774 network_create.go:284] error running [docker network inspect multinode-379549]: docker network inspect multinode-379549: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-379549 not found
	I0108 21:27:31.175334  240774 network_create.go:286] output of [docker network inspect multinode-379549]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-379549 not found
	
	** /stderr **
	I0108 21:27:31.175407  240774 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:27:31.190768  240774 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c42573373d0b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:3d:88:14:67} reservation:<nil>}
	I0108 21:27:31.191216  240774 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002989f30}
	I0108 21:27:31.191245  240774 network_create.go:124] attempt to create docker network multinode-379549 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0108 21:27:31.191308  240774 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-379549 multinode-379549
	I0108 21:27:31.241233  240774 network_create.go:108] docker network multinode-379549 192.168.58.0/24 created
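	Note: 192.168.49.0/24 was skipped because another minikube network on this host already holds it, so this profile landed on 192.168.58.0/24 with 192.168.58.1 as the host-side gateway (the address the failing ping probes target). A quick check of the created network, assuming the same docker daemon (a sketch):
	    docker network inspect multinode-379549 --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'
	    # expected: 192.168.58.0/24 gw 192.168.58.1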
	I0108 21:27:31.241262  240774 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-379549" container
	I0108 21:27:31.241333  240774 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 21:27:31.256041  240774 cli_runner.go:164] Run: docker volume create multinode-379549 --label name.minikube.sigs.k8s.io=multinode-379549 --label created_by.minikube.sigs.k8s.io=true
	I0108 21:27:31.272367  240774 oci.go:103] Successfully created a docker volume multinode-379549
	I0108 21:27:31.272441  240774 cli_runner.go:164] Run: docker run --rm --name multinode-379549-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-379549 --entrypoint /usr/bin/test -v multinode-379549:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -d /var/lib
	I0108 21:27:31.775578  240774 oci.go:107] Successfully prepared a docker volume multinode-379549
	I0108 21:27:31.775633  240774 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:27:31.775667  240774 kic.go:194] Starting extracting preloaded images to volume ...
	I0108 21:27:31.775735  240774 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17866-150013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-379549:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 21:27:36.820412  240774 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17866-150013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-379549:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -I lz4 -xf /preloaded.tar -C /extractDir: (5.044619933s)
	I0108 21:27:36.820448  240774 kic.go:203] duration metric: took 5.044787 seconds to extract preloaded images to volume
	W0108 21:27:36.820595  240774 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 21:27:36.820685  240774 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 21:27:36.872394  240774 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-379549 --name multinode-379549 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-379549 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-379549 --network multinode-379549 --ip 192.168.58.2 --volume multinode-379549:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d
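	Note: the container publishes 22, 2376, 5000, 8443 and 32443 on ephemeral localhost ports (the 127.0.0.1::PORT form lets dockerd pick the host port). The SSH mapping minikube resolves below via docker container inspect can also be read directly (a sketch):
	    docker port multinode-379549 22/tcp
	    # e.g. 127.0.0.1:32847, the endpoint used for provisioning in the lines that follow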
	I0108 21:27:37.166977  240774 cli_runner.go:164] Run: docker container inspect multinode-379549 --format={{.State.Running}}
	I0108 21:27:37.184825  240774 cli_runner.go:164] Run: docker container inspect multinode-379549 --format={{.State.Status}}
	I0108 21:27:37.201649  240774 cli_runner.go:164] Run: docker exec multinode-379549 stat /var/lib/dpkg/alternatives/iptables
	I0108 21:27:37.267701  240774 oci.go:144] the created container "multinode-379549" has a running status.
	I0108 21:27:37.267741  240774 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17866-150013/.minikube/machines/multinode-379549/id_rsa...
	I0108 21:27:37.342243  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/machines/multinode-379549/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0108 21:27:37.342300  240774 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17866-150013/.minikube/machines/multinode-379549/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 21:27:37.363162  240774 cli_runner.go:164] Run: docker container inspect multinode-379549 --format={{.State.Status}}
	I0108 21:27:37.379146  240774 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 21:27:37.379177  240774 kic_runner.go:114] Args: [docker exec --privileged multinode-379549 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 21:27:37.459131  240774 cli_runner.go:164] Run: docker container inspect multinode-379549 --format={{.State.Status}}
	I0108 21:27:37.475818  240774 machine.go:88] provisioning docker machine ...
	I0108 21:27:37.475855  240774 ubuntu.go:169] provisioning hostname "multinode-379549"
	I0108 21:27:37.475938  240774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-379549
	I0108 21:27:37.494053  240774 main.go:141] libmachine: Using SSH client type: native
	I0108 21:27:37.494417  240774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0108 21:27:37.494435  240774 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-379549 && echo "multinode-379549" | sudo tee /etc/hostname
	I0108 21:27:37.495015  240774 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51250->127.0.0.1:32847: read: connection reset by peer
	I0108 21:27:40.643445  240774 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-379549
	
	I0108 21:27:40.643533  240774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-379549
	I0108 21:27:40.659232  240774 main.go:141] libmachine: Using SSH client type: native
	I0108 21:27:40.659570  240774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0108 21:27:40.659588  240774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-379549' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-379549/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-379549' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:27:40.797397  240774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:27:40.797459  240774 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17866-150013/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-150013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-150013/.minikube}
	I0108 21:27:40.797488  240774 ubuntu.go:177] setting up certificates
	I0108 21:27:40.797500  240774 provision.go:83] configureAuth start
	I0108 21:27:40.797584  240774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-379549
	I0108 21:27:40.814622  240774 provision.go:138] copyHostCerts
	I0108 21:27:40.814657  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17866-150013/.minikube/ca.pem
	I0108 21:27:40.814686  240774 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-150013/.minikube/ca.pem, removing ...
	I0108 21:27:40.814695  240774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-150013/.minikube/ca.pem
	I0108 21:27:40.814753  240774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-150013/.minikube/ca.pem (1078 bytes)
	I0108 21:27:40.814817  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17866-150013/.minikube/cert.pem
	I0108 21:27:40.814835  240774 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-150013/.minikube/cert.pem, removing ...
	I0108 21:27:40.814842  240774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-150013/.minikube/cert.pem
	I0108 21:27:40.814864  240774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-150013/.minikube/cert.pem (1123 bytes)
	I0108 21:27:40.814901  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17866-150013/.minikube/key.pem
	I0108 21:27:40.814916  240774 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-150013/.minikube/key.pem, removing ...
	I0108 21:27:40.814925  240774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-150013/.minikube/key.pem
	I0108 21:27:40.814949  240774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-150013/.minikube/key.pem (1675 bytes)
	I0108 21:27:40.815023  240774 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-150013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca-key.pem org=jenkins.multinode-379549 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-379549]
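	Note: the server certificate is minted with the SANs listed above (the static container IP 192.168.58.2 plus loopback, localhost, minikube and the hostname). To inspect them on disk afterwards (a sketch; -ext needs OpenSSL 1.1.1 or newer):
	    openssl x509 -noout -ext subjectAltName \
	      -in /home/jenkins/minikube-integration/17866-150013/.minikube/machines/server.pem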
	I0108 21:27:40.885684  240774 provision.go:172] copyRemoteCerts
	I0108 21:27:40.885747  240774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:27:40.885786  240774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-379549
	I0108 21:27:40.902410  240774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/multinode-379549/id_rsa Username:docker}
	I0108 21:27:40.997661  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 21:27:40.997724  240774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:27:41.018138  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 21:27:41.018189  240774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0108 21:27:41.038254  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 21:27:41.038330  240774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:27:41.058064  240774 provision.go:86] duration metric: configureAuth took 260.5467ms
	I0108 21:27:41.058130  240774 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:27:41.058304  240774 config.go:182] Loaded profile config "multinode-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:27:41.058422  240774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-379549
	I0108 21:27:41.074539  240774 main.go:141] libmachine: Using SSH client type: native
	I0108 21:27:41.074869  240774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0108 21:27:41.074891  240774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 21:27:41.291064  240774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 21:27:41.291097  240774 machine.go:91] provisioned docker machine in 3.815254839s
	I0108 21:27:41.291109  240774 client.go:171] LocalClient.Create took 10.146444596s
	I0108 21:27:41.291134  240774 start.go:167] duration metric: libmachine.API.Create for "multinode-379549" took 10.146504449s
	I0108 21:27:41.291145  240774 start.go:300] post-start starting for "multinode-379549" (driver="docker")
	I0108 21:27:41.291162  240774 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:27:41.291238  240774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:27:41.291293  240774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-379549
	I0108 21:27:41.306866  240774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/multinode-379549/id_rsa Username:docker}
	I0108 21:27:41.401996  240774 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:27:41.404889  240774 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0108 21:27:41.404906  240774 command_runner.go:130] > NAME="Ubuntu"
	I0108 21:27:41.404912  240774 command_runner.go:130] > VERSION_ID="22.04"
	I0108 21:27:41.404917  240774 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0108 21:27:41.404922  240774 command_runner.go:130] > VERSION_CODENAME=jammy
	I0108 21:27:41.404926  240774 command_runner.go:130] > ID=ubuntu
	I0108 21:27:41.404930  240774 command_runner.go:130] > ID_LIKE=debian
	I0108 21:27:41.404934  240774 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0108 21:27:41.404939  240774 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0108 21:27:41.404948  240774 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0108 21:27:41.404955  240774 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0108 21:27:41.404961  240774 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0108 21:27:41.405012  240774 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:27:41.405038  240774 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:27:41.405049  240774 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:27:41.405055  240774 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 21:27:41.405067  240774 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-150013/.minikube/addons for local assets ...
	I0108 21:27:41.405107  240774 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-150013/.minikube/files for local assets ...
	I0108 21:27:41.405191  240774 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-150013/.minikube/files/etc/ssl/certs/1566482.pem -> 1566482.pem in /etc/ssl/certs
	I0108 21:27:41.405203  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/files/etc/ssl/certs/1566482.pem -> /etc/ssl/certs/1566482.pem
	I0108 21:27:41.405288  240774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:27:41.412605  240774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/files/etc/ssl/certs/1566482.pem --> /etc/ssl/certs/1566482.pem (1708 bytes)
	I0108 21:27:41.433597  240774 start.go:303] post-start completed in 142.43409ms
	I0108 21:27:41.433955  240774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-379549
	I0108 21:27:41.450764  240774 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/config.json ...
	I0108 21:27:41.450985  240774 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:27:41.451025  240774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-379549
	I0108 21:27:41.466819  240774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/multinode-379549/id_rsa Username:docker}
	I0108 21:27:41.558031  240774 command_runner.go:130] > 35%!
	(MISSING)I0108 21:27:41.558122  240774 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:27:41.561929  240774 command_runner.go:130] > 190G
	I0108 21:27:41.562129  240774 start.go:128] duration metric: createHost completed in 10.419778832s
	I0108 21:27:41.562148  240774 start.go:83] releasing machines lock for "multinode-379549", held for 10.419919683s
	I0108 21:27:41.562202  240774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-379549
	I0108 21:27:41.577576  240774 ssh_runner.go:195] Run: cat /version.json
	I0108 21:27:41.577604  240774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:27:41.577640  240774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-379549
	I0108 21:27:41.577654  240774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-379549
	I0108 21:27:41.593912  240774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/multinode-379549/id_rsa Username:docker}
	I0108 21:27:41.594913  240774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/multinode-379549/id_rsa Username:docker}
	I0108 21:27:41.771648  240774 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 21:27:41.773707  240774 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1703790982-17866", "minikube_version": "v1.32.0", "commit": "1553e31a427d433b292e8b2292123d8c426f06f5"}
	I0108 21:27:41.773852  240774 ssh_runner.go:195] Run: systemctl --version
	I0108 21:27:41.777670  240774 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I0108 21:27:41.777703  240774 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0108 21:27:41.777925  240774 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 21:27:41.913585  240774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 21:27:41.917430  240774 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0108 21:27:41.917474  240774 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0108 21:27:41.917485  240774 command_runner.go:130] > Device: 37h/55d	Inode: 556131      Links: 1
	I0108 21:27:41.917496  240774 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 21:27:41.917502  240774 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0108 21:27:41.917507  240774 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0108 21:27:41.917512  240774 command_runner.go:130] > Change: 2024-01-08 21:09:21.185659885 +0000
	I0108 21:27:41.917517  240774 command_runner.go:130] >  Birth: 2024-01-08 21:09:21.185659885 +0000
	I0108 21:27:41.917726  240774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:27:41.934727  240774 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 21:27:41.934819  240774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:27:41.960288  240774 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0108 21:27:41.960353  240774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
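	Note: renaming the podman/crio bridge configs to *.mk_disabled leaves the kindnet CNI recommended earlier as the only active plugin. What remains in the CNI directory can be listed with (a sketch):
	    minikube -p multinode-379549 ssh -- ls /etc/cni/net.d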
	I0108 21:27:41.960365  240774 start.go:475] detecting cgroup driver to use...
	I0108 21:27:41.960395  240774 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 21:27:41.960441  240774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:27:41.973809  240774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:27:41.983064  240774 docker.go:203] disabling cri-docker service (if available) ...
	I0108 21:27:41.983114  240774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 21:27:41.994665  240774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 21:27:42.006912  240774 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 21:27:42.081218  240774 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 21:27:42.157177  240774 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0108 21:27:42.157215  240774 docker.go:219] disabling docker service ...
	I0108 21:27:42.157263  240774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:27:42.174228  240774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:27:42.184267  240774 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:27:42.268967  240774 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0108 21:27:42.269041  240774 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:27:42.354123  240774 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0108 21:27:42.354203  240774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:27:42.364101  240774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:27:42.377288  240774 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0108 21:27:42.378102  240774 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 21:27:42.378163  240774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:27:42.386276  240774 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 21:27:42.386354  240774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:27:42.394553  240774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:27:42.402538  240774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
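	Note: the three sed edits above pin the pause image and match CRI-O to the cgroupfs driver detected on the host. The resulting drop-in can be spot-checked with (a sketch; expected values taken from the replacements shown):
	    minikube -p multinode-379549 ssh -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	    # expected:
	    #   pause_image = "registry.k8s.io/pause:3.9"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"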
	I0108 21:27:42.410534  240774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 21:27:42.417976  240774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:27:42.424350  240774 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0108 21:27:42.424997  240774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:27:42.432080  240774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:27:42.507927  240774 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 21:27:42.613083  240774 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 21:27:42.613147  240774 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 21:27:42.616201  240774 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0108 21:27:42.616236  240774 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 21:27:42.616246  240774 command_runner.go:130] > Device: 40h/64d	Inode: 190         Links: 1
	I0108 21:27:42.616257  240774 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 21:27:42.616265  240774 command_runner.go:130] > Access: 2024-01-08 21:27:42.599174349 +0000
	I0108 21:27:42.616286  240774 command_runner.go:130] > Modify: 2024-01-08 21:27:42.599174349 +0000
	I0108 21:27:42.616299  240774 command_runner.go:130] > Change: 2024-01-08 21:27:42.599174349 +0000
	I0108 21:27:42.616303  240774 command_runner.go:130] >  Birth: -
	I0108 21:27:42.616323  240774 start.go:543] Will wait 60s for crictl version
	I0108 21:27:42.616364  240774 ssh_runner.go:195] Run: which crictl
	I0108 21:27:42.619358  240774 command_runner.go:130] > /usr/bin/crictl
	I0108 21:27:42.619429  240774 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 21:27:42.650331  240774 command_runner.go:130] > Version:  0.1.0
	I0108 21:27:42.650352  240774 command_runner.go:130] > RuntimeName:  cri-o
	I0108 21:27:42.650356  240774 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0108 21:27:42.650362  240774 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 21:27:42.652560  240774 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0108 21:27:42.652651  240774 ssh_runner.go:195] Run: crio --version
	I0108 21:27:42.686012  240774 command_runner.go:130] > crio version 1.24.6
	I0108 21:27:42.686034  240774 command_runner.go:130] > Version:          1.24.6
	I0108 21:27:42.686050  240774 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0108 21:27:42.686058  240774 command_runner.go:130] > GitTreeState:     clean
	I0108 21:27:42.686067  240774 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0108 21:27:42.686076  240774 command_runner.go:130] > GoVersion:        go1.18.2
	I0108 21:27:42.686083  240774 command_runner.go:130] > Compiler:         gc
	I0108 21:27:42.686091  240774 command_runner.go:130] > Platform:         linux/amd64
	I0108 21:27:42.686104  240774 command_runner.go:130] > Linkmode:         dynamic
	I0108 21:27:42.686121  240774 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 21:27:42.686126  240774 command_runner.go:130] > SeccompEnabled:   true
	I0108 21:27:42.686132  240774 command_runner.go:130] > AppArmorEnabled:  false
	I0108 21:27:42.686203  240774 ssh_runner.go:195] Run: crio --version
	I0108 21:27:42.716398  240774 command_runner.go:130] > crio version 1.24.6
	I0108 21:27:42.716417  240774 command_runner.go:130] > Version:          1.24.6
	I0108 21:27:42.716424  240774 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0108 21:27:42.716428  240774 command_runner.go:130] > GitTreeState:     clean
	I0108 21:27:42.716434  240774 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0108 21:27:42.716438  240774 command_runner.go:130] > GoVersion:        go1.18.2
	I0108 21:27:42.716442  240774 command_runner.go:130] > Compiler:         gc
	I0108 21:27:42.716447  240774 command_runner.go:130] > Platform:         linux/amd64
	I0108 21:27:42.716452  240774 command_runner.go:130] > Linkmode:         dynamic
	I0108 21:27:42.716462  240774 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 21:27:42.716473  240774 command_runner.go:130] > SeccompEnabled:   true
	I0108 21:27:42.716477  240774 command_runner.go:130] > AppArmorEnabled:  false
	I0108 21:27:42.719967  240774 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0108 21:27:42.721265  240774 cli_runner.go:164] Run: docker network inspect multinode-379549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:27:42.737181  240774 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0108 21:27:42.740544  240774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
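	Note: this writes the 192.168.58.1 -> host.minikube.internal mapping that the busybox nslookup steps in the command table resolve; the failed ping probes target the same gateway address. To confirm the entry landed (a sketch):
	    minikube -p multinode-379549 ssh -- grep host.minikube.internal /etc/hosts
	    # expected: 192.168.58.1	host.minikube.internal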
	I0108 21:27:42.750447  240774 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:27:42.750496  240774 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:27:42.801760  240774 command_runner.go:130] > {
	I0108 21:27:42.801780  240774 command_runner.go:130] >   "images": [
	I0108 21:27:42.801784  240774 command_runner.go:130] >     {
	I0108 21:27:42.801791  240774 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0108 21:27:42.801796  240774 command_runner.go:130] >       "repoTags": [
	I0108 21:27:42.801804  240774 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0108 21:27:42.801808  240774 command_runner.go:130] >       ],
	I0108 21:27:42.801812  240774 command_runner.go:130] >       "repoDigests": [
	I0108 21:27:42.801826  240774 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0108 21:27:42.801833  240774 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0108 21:27:42.801839  240774 command_runner.go:130] >       ],
	I0108 21:27:42.801844  240774 command_runner.go:130] >       "size": "65258016",
	I0108 21:27:42.801851  240774 command_runner.go:130] >       "uid": null,
	I0108 21:27:42.801856  240774 command_runner.go:130] >       "username": "",
	I0108 21:27:42.801862  240774 command_runner.go:130] >       "spec": null,
	I0108 21:27:42.801869  240774 command_runner.go:130] >       "pinned": false
	I0108 21:27:42.801873  240774 command_runner.go:130] >     },
	I0108 21:27:42.801879  240774 command_runner.go:130] >     {
	I0108 21:27:42.801888  240774 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0108 21:27:42.801895  240774 command_runner.go:130] >       "repoTags": [
	I0108 21:27:42.801901  240774 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0108 21:27:42.801907  240774 command_runner.go:130] >       ],
	I0108 21:27:42.801911  240774 command_runner.go:130] >       "repoDigests": [
	I0108 21:27:42.801921  240774 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0108 21:27:42.801929  240774 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0108 21:27:42.801934  240774 command_runner.go:130] >       ],
	I0108 21:27:42.801941  240774 command_runner.go:130] >       "size": "31470524",
	I0108 21:27:42.801948  240774 command_runner.go:130] >       "uid": null,
	I0108 21:27:42.801952  240774 command_runner.go:130] >       "username": "",
	I0108 21:27:42.801958  240774 command_runner.go:130] >       "spec": null,
	I0108 21:27:42.801962  240774 command_runner.go:130] >       "pinned": false
	I0108 21:27:42.801965  240774 command_runner.go:130] >     },
	I0108 21:27:42.801969  240774 command_runner.go:130] >     {
	I0108 21:27:42.801975  240774 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0108 21:27:42.801981  240774 command_runner.go:130] >       "repoTags": [
	I0108 21:27:42.801986  240774 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0108 21:27:42.801992  240774 command_runner.go:130] >       ],
	I0108 21:27:42.801999  240774 command_runner.go:130] >       "repoDigests": [
	I0108 21:27:42.802006  240774 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0108 21:27:42.802016  240774 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0108 21:27:42.802021  240774 command_runner.go:130] >       ],
	I0108 21:27:42.802028  240774 command_runner.go:130] >       "size": "53621675",
	I0108 21:27:42.802035  240774 command_runner.go:130] >       "uid": null,
	I0108 21:27:42.802041  240774 command_runner.go:130] >       "username": "",
	I0108 21:27:42.802047  240774 command_runner.go:130] >       "spec": null,
	I0108 21:27:42.802058  240774 command_runner.go:130] >       "pinned": false
	I0108 21:27:42.802064  240774 command_runner.go:130] >     },
	I0108 21:27:42.802071  240774 command_runner.go:130] >     {
	I0108 21:27:42.802077  240774 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0108 21:27:42.802083  240774 command_runner.go:130] >       "repoTags": [
	I0108 21:27:42.802088  240774 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0108 21:27:42.802094  240774 command_runner.go:130] >       ],
	I0108 21:27:42.802098  240774 command_runner.go:130] >       "repoDigests": [
	I0108 21:27:42.802107  240774 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0108 21:27:42.802118  240774 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0108 21:27:42.802133  240774 command_runner.go:130] >       ],
	I0108 21:27:42.802140  240774 command_runner.go:130] >       "size": "295456551",
	I0108 21:27:42.802144  240774 command_runner.go:130] >       "uid": {
	I0108 21:27:42.802160  240774 command_runner.go:130] >         "value": "0"
	I0108 21:27:42.802163  240774 command_runner.go:130] >       },
	I0108 21:27:42.802169  240774 command_runner.go:130] >       "username": "",
	I0108 21:27:42.802173  240774 command_runner.go:130] >       "spec": null,
	I0108 21:27:42.802180  240774 command_runner.go:130] >       "pinned": false
	I0108 21:27:42.802183  240774 command_runner.go:130] >     },
	I0108 21:27:42.802187  240774 command_runner.go:130] >     {
	I0108 21:27:42.802198  240774 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0108 21:27:42.802205  240774 command_runner.go:130] >       "repoTags": [
	I0108 21:27:42.802210  240774 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0108 21:27:42.802215  240774 command_runner.go:130] >       ],
	I0108 21:27:42.802222  240774 command_runner.go:130] >       "repoDigests": [
	I0108 21:27:42.802231  240774 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0108 21:27:42.802238  240774 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0108 21:27:42.802244  240774 command_runner.go:130] >       ],
	I0108 21:27:42.802251  240774 command_runner.go:130] >       "size": "127226832",
	I0108 21:27:42.802255  240774 command_runner.go:130] >       "uid": {
	I0108 21:27:42.802259  240774 command_runner.go:130] >         "value": "0"
	I0108 21:27:42.802262  240774 command_runner.go:130] >       },
	I0108 21:27:42.802266  240774 command_runner.go:130] >       "username": "",
	I0108 21:27:42.802272  240774 command_runner.go:130] >       "spec": null,
	I0108 21:27:42.802276  240774 command_runner.go:130] >       "pinned": false
	I0108 21:27:42.802280  240774 command_runner.go:130] >     },
	I0108 21:27:42.802284  240774 command_runner.go:130] >     {
	I0108 21:27:42.802292  240774 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0108 21:27:42.802296  240774 command_runner.go:130] >       "repoTags": [
	I0108 21:27:42.802304  240774 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0108 21:27:42.802307  240774 command_runner.go:130] >       ],
	I0108 21:27:42.802311  240774 command_runner.go:130] >       "repoDigests": [
	I0108 21:27:42.802319  240774 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0108 21:27:42.802328  240774 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0108 21:27:42.802332  240774 command_runner.go:130] >       ],
	I0108 21:27:42.802339  240774 command_runner.go:130] >       "size": "123261750",
	I0108 21:27:42.802345  240774 command_runner.go:130] >       "uid": {
	I0108 21:27:42.802348  240774 command_runner.go:130] >         "value": "0"
	I0108 21:27:42.802352  240774 command_runner.go:130] >       },
	I0108 21:27:42.802356  240774 command_runner.go:130] >       "username": "",
	I0108 21:27:42.802360  240774 command_runner.go:130] >       "spec": null,
	I0108 21:27:42.802364  240774 command_runner.go:130] >       "pinned": false
	I0108 21:27:42.802368  240774 command_runner.go:130] >     },
	I0108 21:27:42.802371  240774 command_runner.go:130] >     {
	I0108 21:27:42.802377  240774 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0108 21:27:42.802383  240774 command_runner.go:130] >       "repoTags": [
	I0108 21:27:42.802388  240774 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0108 21:27:42.802392  240774 command_runner.go:130] >       ],
	I0108 21:27:42.802396  240774 command_runner.go:130] >       "repoDigests": [
	I0108 21:27:42.802406  240774 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0108 21:27:42.802413  240774 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0108 21:27:42.802418  240774 command_runner.go:130] >       ],
	I0108 21:27:42.802423  240774 command_runner.go:130] >       "size": "74749335",
	I0108 21:27:42.802429  240774 command_runner.go:130] >       "uid": null,
	I0108 21:27:42.802435  240774 command_runner.go:130] >       "username": "",
	I0108 21:27:42.802439  240774 command_runner.go:130] >       "spec": null,
	I0108 21:27:42.802445  240774 command_runner.go:130] >       "pinned": false
	I0108 21:27:42.802449  240774 command_runner.go:130] >     },
	I0108 21:27:42.802454  240774 command_runner.go:130] >     {
	I0108 21:27:42.802464  240774 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0108 21:27:42.802468  240774 command_runner.go:130] >       "repoTags": [
	I0108 21:27:42.802473  240774 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0108 21:27:42.802479  240774 command_runner.go:130] >       ],
	I0108 21:27:42.802483  240774 command_runner.go:130] >       "repoDigests": [
	I0108 21:27:42.802503  240774 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0108 21:27:42.802512  240774 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0108 21:27:42.802516  240774 command_runner.go:130] >       ],
	I0108 21:27:42.802520  240774 command_runner.go:130] >       "size": "61551410",
	I0108 21:27:42.802526  240774 command_runner.go:130] >       "uid": {
	I0108 21:27:42.802530  240774 command_runner.go:130] >         "value": "0"
	I0108 21:27:42.802536  240774 command_runner.go:130] >       },
	I0108 21:27:42.802543  240774 command_runner.go:130] >       "username": "",
	I0108 21:27:42.802550  240774 command_runner.go:130] >       "spec": null,
	I0108 21:27:42.802554  240774 command_runner.go:130] >       "pinned": false
	I0108 21:27:42.802560  240774 command_runner.go:130] >     },
	I0108 21:27:42.802563  240774 command_runner.go:130] >     {
	I0108 21:27:42.802569  240774 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0108 21:27:42.802575  240774 command_runner.go:130] >       "repoTags": [
	I0108 21:27:42.802580  240774 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0108 21:27:42.802586  240774 command_runner.go:130] >       ],
	I0108 21:27:42.802590  240774 command_runner.go:130] >       "repoDigests": [
	I0108 21:27:42.802596  240774 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0108 21:27:42.802605  240774 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0108 21:27:42.802609  240774 command_runner.go:130] >       ],
	I0108 21:27:42.802613  240774 command_runner.go:130] >       "size": "750414",
	I0108 21:27:42.802621  240774 command_runner.go:130] >       "uid": {
	I0108 21:27:42.802626  240774 command_runner.go:130] >         "value": "65535"
	I0108 21:27:42.802630  240774 command_runner.go:130] >       },
	I0108 21:27:42.802634  240774 command_runner.go:130] >       "username": "",
	I0108 21:27:42.802643  240774 command_runner.go:130] >       "spec": null,
	I0108 21:27:42.802647  240774 command_runner.go:130] >       "pinned": false
	I0108 21:27:42.802650  240774 command_runner.go:130] >     }
	I0108 21:27:42.802656  240774 command_runner.go:130] >   ]
	I0108 21:27:42.802659  240774 command_runner.go:130] > }
	I0108 21:27:42.803405  240774 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 21:27:42.803425  240774 crio.go:415] Images already preloaded, skipping extraction
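crio.go reaches the "all images are preloaded" verdict by decoding the crictl JSON dump above and confirming that every tag required for Kubernetes v1.28.4 is present. A sketch of that check against the exact JSON shape shown in the log; the required-image list and the preloaded helper are assumptions, not minikube's real API:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // imageList matches the shape of `crictl images --output json` above.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // preloaded reports whether every required tag is already present.
    func preloaded(required []string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            return false, err
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        for _, want := range required {
            if !have[want] {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        ok, err := preloaded([]string{
            "registry.k8s.io/kube-apiserver:v1.28.4",
            "registry.k8s.io/etcd:3.5.9-0",
            "registry.k8s.io/coredns/coredns:v1.10.1",
        })
        fmt.Println(ok, err)
    }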
	I0108 21:27:42.803468  240774 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:27:42.835057  240774 command_runner.go:130] > {
	I0108 21:27:42.835076  240774 command_runner.go:130] >   "images": [
	I0108 21:27:42.835080  240774 command_runner.go:130] >     {
	I0108 21:27:42.835088  240774 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0108 21:27:42.835092  240774 command_runner.go:130] >       "repoTags": [
	I0108 21:27:42.835098  240774 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0108 21:27:42.835102  240774 command_runner.go:130] >       ],
	I0108 21:27:42.835106  240774 command_runner.go:130] >       "repoDigests": [
	I0108 21:27:42.835114  240774 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0108 21:27:42.835124  240774 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0108 21:27:42.835128  240774 command_runner.go:130] >       ],
	I0108 21:27:42.835135  240774 command_runner.go:130] >       "size": "65258016",
	I0108 21:27:42.835139  240774 command_runner.go:130] >       "uid": null,
	I0108 21:27:42.835146  240774 command_runner.go:130] >       "username": "",
	I0108 21:27:42.835151  240774 command_runner.go:130] >       "spec": null,
	I0108 21:27:42.835160  240774 command_runner.go:130] >       "pinned": false
	I0108 21:27:42.835164  240774 command_runner.go:130] >     },
	I0108 21:27:42.835167  240774 command_runner.go:130] >     {
	I0108 21:27:42.835173  240774 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0108 21:27:42.835177  240774 command_runner.go:130] >       "repoTags": [
	I0108 21:27:42.835182  240774 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0108 21:27:42.835185  240774 command_runner.go:130] >       ],
	I0108 21:27:42.835189  240774 command_runner.go:130] >       "repoDigests": [
	I0108 21:27:42.835197  240774 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0108 21:27:42.835204  240774 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0108 21:27:42.835207  240774 command_runner.go:130] >       ],
	I0108 21:27:42.835216  240774 command_runner.go:130] >       "size": "31470524",
	I0108 21:27:42.835223  240774 command_runner.go:130] >       "uid": null,
	I0108 21:27:42.835228  240774 command_runner.go:130] >       "username": "",
	I0108 21:27:42.835232  240774 command_runner.go:130] >       "spec": null,
	I0108 21:27:42.835238  240774 command_runner.go:130] >       "pinned": false
	I0108 21:27:42.835242  240774 command_runner.go:130] >     },
	I0108 21:27:42.835248  240774 command_runner.go:130] >     {
	I0108 21:27:42.835258  240774 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0108 21:27:42.835265  240774 command_runner.go:130] >       "repoTags": [
	I0108 21:27:42.835270  240774 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0108 21:27:42.835276  240774 command_runner.go:130] >       ],
	I0108 21:27:42.835280  240774 command_runner.go:130] >       "repoDigests": [
	I0108 21:27:42.835289  240774 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0108 21:27:42.835299  240774 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0108 21:27:42.835305  240774 command_runner.go:130] >       ],
	I0108 21:27:42.835310  240774 command_runner.go:130] >       "size": "53621675",
	I0108 21:27:42.835316  240774 command_runner.go:130] >       "uid": null,
	I0108 21:27:42.835320  240774 command_runner.go:130] >       "username": "",
	I0108 21:27:42.835326  240774 command_runner.go:130] >       "spec": null,
	I0108 21:27:42.835330  240774 command_runner.go:130] >       "pinned": false
	I0108 21:27:42.835334  240774 command_runner.go:130] >     },
	I0108 21:27:42.835340  240774 command_runner.go:130] >     {
	I0108 21:27:42.835347  240774 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0108 21:27:42.835353  240774 command_runner.go:130] >       "repoTags": [
	I0108 21:27:42.835359  240774 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0108 21:27:42.835367  240774 command_runner.go:130] >       ],
	I0108 21:27:42.835372  240774 command_runner.go:130] >       "repoDigests": [
	I0108 21:27:42.835379  240774 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0108 21:27:42.835388  240774 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0108 21:27:42.835401  240774 command_runner.go:130] >       ],
	I0108 21:27:42.835408  240774 command_runner.go:130] >       "size": "295456551",
	I0108 21:27:42.835412  240774 command_runner.go:130] >       "uid": {
	I0108 21:27:42.835419  240774 command_runner.go:130] >         "value": "0"
	I0108 21:27:42.835422  240774 command_runner.go:130] >       },
	I0108 21:27:42.835429  240774 command_runner.go:130] >       "username": "",
	I0108 21:27:42.835433  240774 command_runner.go:130] >       "spec": null,
	I0108 21:27:42.835439  240774 command_runner.go:130] >       "pinned": false
	I0108 21:27:42.835443  240774 command_runner.go:130] >     },
	I0108 21:27:42.835449  240774 command_runner.go:130] >     {
	I0108 21:27:42.835455  240774 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0108 21:27:42.835461  240774 command_runner.go:130] >       "repoTags": [
	I0108 21:27:42.835466  240774 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0108 21:27:42.835472  240774 command_runner.go:130] >       ],
	I0108 21:27:42.835479  240774 command_runner.go:130] >       "repoDigests": [
	I0108 21:27:42.835489  240774 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0108 21:27:42.835498  240774 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0108 21:27:42.835504  240774 command_runner.go:130] >       ],
	I0108 21:27:42.835509  240774 command_runner.go:130] >       "size": "127226832",
	I0108 21:27:42.835514  240774 command_runner.go:130] >       "uid": {
	I0108 21:27:42.835519  240774 command_runner.go:130] >         "value": "0"
	I0108 21:27:42.835524  240774 command_runner.go:130] >       },
	I0108 21:27:42.835529  240774 command_runner.go:130] >       "username": "",
	I0108 21:27:42.835535  240774 command_runner.go:130] >       "spec": null,
	I0108 21:27:42.835539  240774 command_runner.go:130] >       "pinned": false
	I0108 21:27:42.835545  240774 command_runner.go:130] >     },
	I0108 21:27:42.835549  240774 command_runner.go:130] >     {
	I0108 21:27:42.835557  240774 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0108 21:27:42.835563  240774 command_runner.go:130] >       "repoTags": [
	I0108 21:27:42.835569  240774 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0108 21:27:42.835575  240774 command_runner.go:130] >       ],
	I0108 21:27:42.835579  240774 command_runner.go:130] >       "repoDigests": [
	I0108 21:27:42.835595  240774 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0108 21:27:42.835605  240774 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0108 21:27:42.835611  240774 command_runner.go:130] >       ],
	I0108 21:27:42.835615  240774 command_runner.go:130] >       "size": "123261750",
	I0108 21:27:42.835621  240774 command_runner.go:130] >       "uid": {
	I0108 21:27:42.835625  240774 command_runner.go:130] >         "value": "0"
	I0108 21:27:42.835631  240774 command_runner.go:130] >       },
	I0108 21:27:42.835636  240774 command_runner.go:130] >       "username": "",
	I0108 21:27:42.835642  240774 command_runner.go:130] >       "spec": null,
	I0108 21:27:42.835656  240774 command_runner.go:130] >       "pinned": false
	I0108 21:27:42.835663  240774 command_runner.go:130] >     },
	I0108 21:27:42.835667  240774 command_runner.go:130] >     {
	I0108 21:27:42.835675  240774 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0108 21:27:42.835681  240774 command_runner.go:130] >       "repoTags": [
	I0108 21:27:42.835687  240774 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0108 21:27:42.835693  240774 command_runner.go:130] >       ],
	I0108 21:27:42.835697  240774 command_runner.go:130] >       "repoDigests": [
	I0108 21:27:42.835706  240774 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0108 21:27:42.835718  240774 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0108 21:27:42.835725  240774 command_runner.go:130] >       ],
	I0108 21:27:42.835729  240774 command_runner.go:130] >       "size": "74749335",
	I0108 21:27:42.835735  240774 command_runner.go:130] >       "uid": null,
	I0108 21:27:42.835740  240774 command_runner.go:130] >       "username": "",
	I0108 21:27:42.835746  240774 command_runner.go:130] >       "spec": null,
	I0108 21:27:42.835750  240774 command_runner.go:130] >       "pinned": false
	I0108 21:27:42.835754  240774 command_runner.go:130] >     },
	I0108 21:27:42.835759  240774 command_runner.go:130] >     {
	I0108 21:27:42.835766  240774 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0108 21:27:42.835772  240774 command_runner.go:130] >       "repoTags": [
	I0108 21:27:42.835778  240774 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0108 21:27:42.835783  240774 command_runner.go:130] >       ],
	I0108 21:27:42.835788  240774 command_runner.go:130] >       "repoDigests": [
	I0108 21:27:42.835820  240774 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0108 21:27:42.835833  240774 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0108 21:27:42.835836  240774 command_runner.go:130] >       ],
	I0108 21:27:42.835841  240774 command_runner.go:130] >       "size": "61551410",
	I0108 21:27:42.835849  240774 command_runner.go:130] >       "uid": {
	I0108 21:27:42.835856  240774 command_runner.go:130] >         "value": "0"
	I0108 21:27:42.835860  240774 command_runner.go:130] >       },
	I0108 21:27:42.835867  240774 command_runner.go:130] >       "username": "",
	I0108 21:27:42.835871  240774 command_runner.go:130] >       "spec": null,
	I0108 21:27:42.835877  240774 command_runner.go:130] >       "pinned": false
	I0108 21:27:42.835881  240774 command_runner.go:130] >     },
	I0108 21:27:42.835885  240774 command_runner.go:130] >     {
	I0108 21:27:42.835891  240774 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0108 21:27:42.835898  240774 command_runner.go:130] >       "repoTags": [
	I0108 21:27:42.835903  240774 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0108 21:27:42.835909  240774 command_runner.go:130] >       ],
	I0108 21:27:42.835913  240774 command_runner.go:130] >       "repoDigests": [
	I0108 21:27:42.835922  240774 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0108 21:27:42.835931  240774 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0108 21:27:42.835935  240774 command_runner.go:130] >       ],
	I0108 21:27:42.835941  240774 command_runner.go:130] >       "size": "750414",
	I0108 21:27:42.835945  240774 command_runner.go:130] >       "uid": {
	I0108 21:27:42.835954  240774 command_runner.go:130] >         "value": "65535"
	I0108 21:27:42.835960  240774 command_runner.go:130] >       },
	I0108 21:27:42.835964  240774 command_runner.go:130] >       "username": "",
	I0108 21:27:42.835970  240774 command_runner.go:130] >       "spec": null,
	I0108 21:27:42.835974  240774 command_runner.go:130] >       "pinned": false
	I0108 21:27:42.835980  240774 command_runner.go:130] >     }
	I0108 21:27:42.835984  240774 command_runner.go:130] >   ]
	I0108 21:27:42.835989  240774 command_runner.go:130] > }
	I0108 21:27:42.836095  240774 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 21:27:42.836106  240774 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:27:42.836160  240774 ssh_runner.go:195] Run: crio config
	I0108 21:27:42.870514  240774 command_runner.go:130] ! time="2024-01-08 21:27:42.870094793Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0108 21:27:42.870545  240774 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
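Everything `crio config` prints from here on is the effective configuration rendered as TOML: commented lines document built-in defaults, while the few uncommented keys (conmon_cgroup, cgroup_manager, and the runc runtime table further down) are values this image has explicitly set. One quick way to surface only those active overrides is to drop comments and blank lines, e.g. this sketch (not a minikube helper):

    package main

    import (
        "bufio"
        "bytes"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("crio", "config").Output()
        if err != nil {
            panic(err)
        }
        sc := bufio.NewScanner(bytes.NewReader(out))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue // commented lines are just documented defaults
            }
            fmt.Println(line) // table headers and explicitly set keys
        }
    }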
	I0108 21:27:42.875273  240774 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0108 21:27:42.875303  240774 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0108 21:27:42.875312  240774 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0108 21:27:42.875318  240774 command_runner.go:130] > #
	I0108 21:27:42.875329  240774 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0108 21:27:42.875350  240774 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0108 21:27:42.875360  240774 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0108 21:27:42.875370  240774 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0108 21:27:42.875376  240774 command_runner.go:130] > # reload'.
	I0108 21:27:42.875385  240774 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0108 21:27:42.875393  240774 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0108 21:27:42.875402  240774 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0108 21:27:42.875410  240774 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0108 21:27:42.875416  240774 command_runner.go:130] > [crio]
	I0108 21:27:42.875423  240774 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0108 21:27:42.875430  240774 command_runner.go:130] > # container images, in this directory.
	I0108 21:27:42.875439  240774 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0108 21:27:42.875448  240774 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0108 21:27:42.875455  240774 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0108 21:27:42.875464  240774 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0108 21:27:42.875472  240774 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0108 21:27:42.875479  240774 command_runner.go:130] > # storage_driver = "vfs"
	I0108 21:27:42.875485  240774 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0108 21:27:42.875493  240774 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0108 21:27:42.875498  240774 command_runner.go:130] > # storage_option = [
	I0108 21:27:42.875501  240774 command_runner.go:130] > # ]
	I0108 21:27:42.875508  240774 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0108 21:27:42.875520  240774 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0108 21:27:42.875527  240774 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0108 21:27:42.875532  240774 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0108 21:27:42.875540  240774 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0108 21:27:42.875545  240774 command_runner.go:130] > # always happen on a node reboot
	I0108 21:27:42.875552  240774 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0108 21:27:42.875558  240774 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0108 21:27:42.875566  240774 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0108 21:27:42.875578  240774 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0108 21:27:42.875585  240774 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0108 21:27:42.875596  240774 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0108 21:27:42.875606  240774 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0108 21:27:42.875612  240774 command_runner.go:130] > # internal_wipe = true
	I0108 21:27:42.875618  240774 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0108 21:27:42.875627  240774 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0108 21:27:42.875635  240774 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0108 21:27:42.875640  240774 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0108 21:27:42.875650  240774 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0108 21:27:42.875662  240774 command_runner.go:130] > [crio.api]
	I0108 21:27:42.875673  240774 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0108 21:27:42.875680  240774 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0108 21:27:42.875685  240774 command_runner.go:130] > # IP address on which the stream server will listen.
	I0108 21:27:42.875692  240774 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0108 21:27:42.875699  240774 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0108 21:27:42.875706  240774 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0108 21:27:42.875721  240774 command_runner.go:130] > # stream_port = "0"
	I0108 21:27:42.875729  240774 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0108 21:27:42.875736  240774 command_runner.go:130] > # stream_enable_tls = false
	I0108 21:27:42.875742  240774 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0108 21:27:42.875748  240774 command_runner.go:130] > # stream_idle_timeout = ""
	I0108 21:27:42.875754  240774 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0108 21:27:42.875762  240774 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0108 21:27:42.875769  240774 command_runner.go:130] > # minutes.
	I0108 21:27:42.875773  240774 command_runner.go:130] > # stream_tls_cert = ""
	I0108 21:27:42.875782  240774 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0108 21:27:42.875788  240774 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0108 21:27:42.875797  240774 command_runner.go:130] > # stream_tls_key = ""
	I0108 21:27:42.875805  240774 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0108 21:27:42.875813  240774 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0108 21:27:42.875820  240774 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0108 21:27:42.875824  240774 command_runner.go:130] > # stream_tls_ca = ""
	I0108 21:27:42.875831  240774 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 21:27:42.875838  240774 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0108 21:27:42.875845  240774 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 21:27:42.875852  240774 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0108 21:27:42.875889  240774 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0108 21:27:42.875901  240774 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0108 21:27:42.875909  240774 command_runner.go:130] > [crio.runtime]
	I0108 21:27:42.875915  240774 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0108 21:27:42.875923  240774 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0108 21:27:42.875940  240774 command_runner.go:130] > # "nofile=1024:2048"
	I0108 21:27:42.875956  240774 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0108 21:27:42.875963  240774 command_runner.go:130] > # default_ulimits = [
	I0108 21:27:42.875966  240774 command_runner.go:130] > # ]
	I0108 21:27:42.875977  240774 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0108 21:27:42.875983  240774 command_runner.go:130] > # no_pivot = false
	I0108 21:27:42.875989  240774 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0108 21:27:42.875997  240774 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0108 21:27:42.876004  240774 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0108 21:27:42.876010  240774 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0108 21:27:42.876017  240774 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0108 21:27:42.876029  240774 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 21:27:42.876036  240774 command_runner.go:130] > # conmon = ""
	I0108 21:27:42.876040  240774 command_runner.go:130] > # Cgroup setting for conmon
	I0108 21:27:42.876049  240774 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0108 21:27:42.876055  240774 command_runner.go:130] > conmon_cgroup = "pod"
	I0108 21:27:42.876061  240774 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0108 21:27:42.876068  240774 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0108 21:27:42.876074  240774 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 21:27:42.876081  240774 command_runner.go:130] > # conmon_env = [
	I0108 21:27:42.876084  240774 command_runner.go:130] > # ]
	I0108 21:27:42.876092  240774 command_runner.go:130] > # Additional environment variables to set for all the
	I0108 21:27:42.876102  240774 command_runner.go:130] > # containers. These are overridden if set in the
	I0108 21:27:42.876110  240774 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0108 21:27:42.876115  240774 command_runner.go:130] > # default_env = [
	I0108 21:27:42.876118  240774 command_runner.go:130] > # ]
	I0108 21:27:42.876126  240774 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0108 21:27:42.876132  240774 command_runner.go:130] > # selinux = false
	I0108 21:27:42.876139  240774 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0108 21:27:42.876151  240774 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0108 21:27:42.876159  240774 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0108 21:27:42.876163  240774 command_runner.go:130] > # seccomp_profile = ""
	I0108 21:27:42.876170  240774 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0108 21:27:42.876176  240774 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0108 21:27:42.876185  240774 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0108 21:27:42.876192  240774 command_runner.go:130] > # which might increase security.
	I0108 21:27:42.876196  240774 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0108 21:27:42.876204  240774 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0108 21:27:42.876212  240774 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0108 21:27:42.876220  240774 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0108 21:27:42.876231  240774 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0108 21:27:42.876238  240774 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:27:42.876243  240774 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0108 21:27:42.876251  240774 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0108 21:27:42.876255  240774 command_runner.go:130] > # the cgroup blockio controller.
	I0108 21:27:42.876261  240774 command_runner.go:130] > # blockio_config_file = ""
	I0108 21:27:42.876267  240774 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0108 21:27:42.876273  240774 command_runner.go:130] > # irqbalance daemon.
	I0108 21:27:42.876279  240774 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0108 21:27:42.876287  240774 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0108 21:27:42.876297  240774 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:27:42.876304  240774 command_runner.go:130] > # rdt_config_file = ""
	I0108 21:27:42.876309  240774 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0108 21:27:42.876315  240774 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0108 21:27:42.876321  240774 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0108 21:27:42.876327  240774 command_runner.go:130] > # separate_pull_cgroup = ""
	I0108 21:27:42.876336  240774 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0108 21:27:42.876344  240774 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0108 21:27:42.876353  240774 command_runner.go:130] > # will be added.
	I0108 21:27:42.876361  240774 command_runner.go:130] > # default_capabilities = [
	I0108 21:27:42.876364  240774 command_runner.go:130] > # 	"CHOWN",
	I0108 21:27:42.876369  240774 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0108 21:27:42.876373  240774 command_runner.go:130] > # 	"FSETID",
	I0108 21:27:42.876379  240774 command_runner.go:130] > # 	"FOWNER",
	I0108 21:27:42.876383  240774 command_runner.go:130] > # 	"SETGID",
	I0108 21:27:42.876389  240774 command_runner.go:130] > # 	"SETUID",
	I0108 21:27:42.876392  240774 command_runner.go:130] > # 	"SETPCAP",
	I0108 21:27:42.876398  240774 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0108 21:27:42.876402  240774 command_runner.go:130] > # 	"KILL",
	I0108 21:27:42.876406  240774 command_runner.go:130] > # ]
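With default_capabilities commented out, CRI-O grants its built-in set, which matches the list above and the "Using default capabilities" line logged at the start of this crio config run. To trim or extend the set you would set the key in a config file CRI-O reads; a hedged sketch using a drop-in (the /etc/crio/crio.conf.d path is CRI-O's conventional drop-in directory, assumed here; the dump does not mark this option as supporting live reload, so plan on restarting CRI-O):

    package main

    import "os"

    // Override the default capability set via a drop-in; everything else
    // keeps CRI-O's defaults. TOML ignores leading whitespace, so the
    // indented literal below is still valid.
    const dropIn = `[crio.runtime]
    default_capabilities = [
        "CHOWN",
        "DAC_OVERRIDE",
        "FOWNER",
        "SETGID",
        "SETUID",
        "NET_BIND_SERVICE",
    ]`

    func main() {
        if err := os.WriteFile("/etc/crio/crio.conf.d/10-capabilities.conf", []byte(dropIn+"\n"), 0o644); err != nil {
            panic(err)
        }
        // Restart CRI-O afterwards for the new set to take effect.
    }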
	I0108 21:27:42.876416  240774 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0108 21:27:42.876424  240774 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0108 21:27:42.876431  240774 command_runner.go:130] > # add_inheritable_capabilities = true
	I0108 21:27:42.876437  240774 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0108 21:27:42.876445  240774 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 21:27:42.876449  240774 command_runner.go:130] > # default_sysctls = [
	I0108 21:27:42.876455  240774 command_runner.go:130] > # ]
	I0108 21:27:42.876462  240774 command_runner.go:130] > # List of devices on the host that a
	I0108 21:27:42.876468  240774 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0108 21:27:42.876475  240774 command_runner.go:130] > # allowed_devices = [
	I0108 21:27:42.876479  240774 command_runner.go:130] > # 	"/dev/fuse",
	I0108 21:27:42.876484  240774 command_runner.go:130] > # ]
	I0108 21:27:42.876489  240774 command_runner.go:130] > # List of additional devices, specified as
	I0108 21:27:42.876520  240774 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0108 21:27:42.876528  240774 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0108 21:27:42.876533  240774 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 21:27:42.876543  240774 command_runner.go:130] > # additional_devices = [
	I0108 21:27:42.876549  240774 command_runner.go:130] > # ]
	I0108 21:27:42.876554  240774 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0108 21:27:42.876561  240774 command_runner.go:130] > # cdi_spec_dirs = [
	I0108 21:27:42.876565  240774 command_runner.go:130] > # 	"/etc/cdi",
	I0108 21:27:42.876571  240774 command_runner.go:130] > # 	"/var/run/cdi",
	I0108 21:27:42.876575  240774 command_runner.go:130] > # ]
	I0108 21:27:42.876581  240774 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0108 21:27:42.876592  240774 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0108 21:27:42.876598  240774 command_runner.go:130] > # Defaults to false.
	I0108 21:27:42.876604  240774 command_runner.go:130] > # device_ownership_from_security_context = false
	I0108 21:27:42.876612  240774 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0108 21:27:42.876620  240774 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0108 21:27:42.876626  240774 command_runner.go:130] > # hooks_dir = [
	I0108 21:27:42.876631  240774 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0108 21:27:42.876635  240774 command_runner.go:130] > # ]
	I0108 21:27:42.876640  240774 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0108 21:27:42.876651  240774 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0108 21:27:42.876659  240774 command_runner.go:130] > # its default mounts from the following two files:
	I0108 21:27:42.876662  240774 command_runner.go:130] > #
	I0108 21:27:42.876670  240774 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0108 21:27:42.876679  240774 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0108 21:27:42.876686  240774 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0108 21:27:42.876690  240774 command_runner.go:130] > #
	I0108 21:27:42.876696  240774 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0108 21:27:42.876704  240774 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0108 21:27:42.876714  240774 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0108 21:27:42.876721  240774 command_runner.go:130] > #      only add mounts it finds in this file.
	I0108 21:27:42.876724  240774 command_runner.go:130] > #
	I0108 21:27:42.876731  240774 command_runner.go:130] > # default_mounts_file = ""
	I0108 21:27:42.876737  240774 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0108 21:27:42.876746  240774 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0108 21:27:42.876750  240774 command_runner.go:130] > # pids_limit = 0
	I0108 21:27:42.876758  240774 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0108 21:27:42.876766  240774 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0108 21:27:42.876775  240774 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0108 21:27:42.876785  240774 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0108 21:27:42.876791  240774 command_runner.go:130] > # log_size_max = -1
	I0108 21:27:42.876798  240774 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0108 21:27:42.876806  240774 command_runner.go:130] > # log_to_journald = false
	I0108 21:27:42.876814  240774 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0108 21:27:42.876821  240774 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0108 21:27:42.876826  240774 command_runner.go:130] > # Path to directory for container attach sockets.
	I0108 21:27:42.876834  240774 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0108 21:27:42.876844  240774 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0108 21:27:42.876851  240774 command_runner.go:130] > # bind_mount_prefix = ""
	I0108 21:27:42.876857  240774 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0108 21:27:42.876863  240774 command_runner.go:130] > # read_only = false
	I0108 21:27:42.876869  240774 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0108 21:27:42.876876  240774 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0108 21:27:42.876883  240774 command_runner.go:130] > # live configuration reload.
	I0108 21:27:42.876887  240774 command_runner.go:130] > # log_level = "info"
	I0108 21:27:42.876895  240774 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0108 21:27:42.876902  240774 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:27:42.876906  240774 command_runner.go:130] > # log_filter = ""
	I0108 21:27:42.876913  240774 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0108 21:27:42.876921  240774 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0108 21:27:42.876928  240774 command_runner.go:130] > # separated by comma.
	I0108 21:27:42.876932  240774 command_runner.go:130] > # uid_mappings = ""
	I0108 21:27:42.876940  240774 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0108 21:27:42.876948  240774 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0108 21:27:42.876954  240774 command_runner.go:130] > # separated by comma.
	I0108 21:27:42.876960  240774 command_runner.go:130] > # gid_mappings = ""
	I0108 21:27:42.876968  240774 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0108 21:27:42.876976  240774 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 21:27:42.876984  240774 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 21:27:42.876989  240774 command_runner.go:130] > # minimum_mappable_uid = -1
	I0108 21:27:42.876995  240774 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0108 21:27:42.877003  240774 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 21:27:42.877012  240774 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 21:27:42.877018  240774 command_runner.go:130] > # minimum_mappable_gid = -1
	I0108 21:27:42.877024  240774 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0108 21:27:42.877031  240774 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0108 21:27:42.877039  240774 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0108 21:27:42.877046  240774 command_runner.go:130] > # ctr_stop_timeout = 30
	I0108 21:27:42.877055  240774 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0108 21:27:42.877065  240774 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0108 21:27:42.877070  240774 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0108 21:27:42.877077  240774 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0108 21:27:42.877081  240774 command_runner.go:130] > # drop_infra_ctr = true
	I0108 21:27:42.877092  240774 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0108 21:27:42.877100  240774 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0108 21:27:42.877108  240774 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0108 21:27:42.877114  240774 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0108 21:27:42.877120  240774 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0108 21:27:42.877127  240774 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0108 21:27:42.877131  240774 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0108 21:27:42.877140  240774 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0108 21:27:42.877149  240774 command_runner.go:130] > # pinns_path = ""
	I0108 21:27:42.877156  240774 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0108 21:27:42.877164  240774 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0108 21:27:42.877172  240774 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0108 21:27:42.877178  240774 command_runner.go:130] > # default_runtime = "runc"
	I0108 21:27:42.877184  240774 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0108 21:27:42.877193  240774 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0108 21:27:42.877206  240774 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0108 21:27:42.877213  240774 command_runner.go:130] > # creation as a file is not desired either.
	I0108 21:27:42.877221  240774 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0108 21:27:42.877233  240774 command_runner.go:130] > # the hostname is being managed dynamically.
	I0108 21:27:42.877240  240774 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0108 21:27:42.877243  240774 command_runner.go:130] > # ]
	I0108 21:27:42.877252  240774 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0108 21:27:42.877261  240774 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0108 21:27:42.877269  240774 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0108 21:27:42.877277  240774 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0108 21:27:42.877282  240774 command_runner.go:130] > #
	I0108 21:27:42.877287  240774 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0108 21:27:42.877294  240774 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0108 21:27:42.877298  240774 command_runner.go:130] > #  runtime_type = "oci"
	I0108 21:27:42.877306  240774 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0108 21:27:42.877310  240774 command_runner.go:130] > #  privileged_without_host_devices = false
	I0108 21:27:42.877317  240774 command_runner.go:130] > #  allowed_annotations = []
	I0108 21:27:42.877320  240774 command_runner.go:130] > # Where:
	I0108 21:27:42.877329  240774 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0108 21:27:42.877335  240774 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0108 21:27:42.877344  240774 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0108 21:27:42.877355  240774 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0108 21:27:42.877361  240774 command_runner.go:130] > #   in $PATH.
	I0108 21:27:42.877367  240774 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0108 21:27:42.877374  240774 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0108 21:27:42.877380  240774 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0108 21:27:42.877386  240774 command_runner.go:130] > #   state.
	I0108 21:27:42.877392  240774 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0108 21:27:42.877398  240774 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0108 21:27:42.877407  240774 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0108 21:27:42.877414  240774 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0108 21:27:42.877421  240774 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0108 21:27:42.877430  240774 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0108 21:27:42.877436  240774 command_runner.go:130] > #   The currently recognized values are:
	I0108 21:27:42.877458  240774 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0108 21:27:42.877474  240774 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0108 21:27:42.877483  240774 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0108 21:27:42.877489  240774 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0108 21:27:42.877497  240774 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0108 21:27:42.877508  240774 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0108 21:27:42.877516  240774 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0108 21:27:42.877524  240774 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0108 21:27:42.877531  240774 command_runner.go:130] > #   should be moved to the container's cgroup
	I0108 21:27:42.877538  240774 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0108 21:27:42.877543  240774 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0108 21:27:42.877549  240774 command_runner.go:130] > runtime_type = "oci"
	I0108 21:27:42.877554  240774 command_runner.go:130] > runtime_root = "/run/runc"
	I0108 21:27:42.877560  240774 command_runner.go:130] > runtime_config_path = ""
	I0108 21:27:42.877564  240774 command_runner.go:130] > monitor_path = ""
	I0108 21:27:42.877570  240774 command_runner.go:130] > monitor_cgroup = ""
	I0108 21:27:42.877574  240774 command_runner.go:130] > monitor_exec_cgroup = ""
	I0108 21:27:42.877623  240774 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0108 21:27:42.877633  240774 command_runner.go:130] > # running containers
	I0108 21:27:42.877637  240774 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0108 21:27:42.877643  240774 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0108 21:27:42.877651  240774 command_runner.go:130] > # VMs. Kata provides additional isolation from the host, minimizing the host attack
	I0108 21:27:42.877661  240774 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0108 21:27:42.877671  240774 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0108 21:27:42.877677  240774 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0108 21:27:42.877682  240774 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0108 21:27:42.877688  240774 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0108 21:27:42.877693  240774 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0108 21:27:42.877700  240774 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
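
	A hedged sketch of what one of the commented-out VM-type handlers above could look like once enabled, using CRI-O's drop-in directory (the shim and config paths are illustrative, not taken from this run):

	sudo tee /etc/crio/crio.conf.d/10-kata-qemu.conf >/dev/null <<-'EOF'
	[crio.runtime.runtimes.kata-qemu]
	runtime_path = "/usr/bin/containerd-shim-kata-v2"
	runtime_type = "vm"
	runtime_config_path = "/usr/share/defaults/kata-containers/configuration-qemu.toml"
	EOF
	sudo systemctl restart crio
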
	I0108 21:27:42.877706  240774 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0108 21:27:42.877714  240774 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0108 21:27:42.877721  240774 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0108 21:27:42.877731  240774 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix, and a set of resources it supports mutating.
	I0108 21:27:42.877740  240774 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0108 21:27:42.877748  240774 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0108 21:27:42.877757  240774 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0108 21:27:42.877766  240774 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified,
	I0108 21:27:42.877774  240774 command_runner.go:130] > # signifying that the default value for that resource type should be overridden.
	I0108 21:27:42.877784  240774 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0108 21:27:42.877790  240774 command_runner.go:130] > # Example:
	I0108 21:27:42.877795  240774 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0108 21:27:42.877805  240774 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0108 21:27:42.877812  240774 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0108 21:27:42.877817  240774 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0108 21:27:42.877823  240774 command_runner.go:130] > # cpuset = "0-1"
	I0108 21:27:42.877827  240774 command_runner.go:130] > # cpushares = 0
	I0108 21:27:42.877831  240774 command_runner.go:130] > # Where:
	I0108 21:27:42.877837  240774 command_runner.go:130] > # The workload name is workload-type.
	I0108 21:27:42.877846  240774 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0108 21:27:42.877854  240774 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0108 21:27:42.877862  240774 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0108 21:27:42.877872  240774 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0108 21:27:42.877880  240774 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0108 21:27:42.877886  240774 command_runner.go:130] > # 
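
	A hedged sketch of a pod opting into the example workload above (the pod and container names are hypothetical and the cpushares value is illustrative; the annotation keys are the ones from the sample config):

	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""
	    io.crio.workload-type/app: '{"cpushares": "512"}'
	spec:
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.9
	EOF
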
	I0108 21:27:42.877892  240774 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0108 21:27:42.877897  240774 command_runner.go:130] > #
	I0108 21:27:42.877903  240774 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system-wide
	I0108 21:27:42.877911  240774 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0108 21:27:42.877919  240774 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0108 21:27:42.877943  240774 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0108 21:27:42.877951  240774 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0108 21:27:42.877956  240774 command_runner.go:130] > [crio.image]
	I0108 21:27:42.877962  240774 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0108 21:27:42.877968  240774 command_runner.go:130] > # default_transport = "docker://"
	I0108 21:27:42.877974  240774 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0108 21:27:42.877983  240774 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0108 21:27:42.877993  240774 command_runner.go:130] > # global_auth_file = ""
	I0108 21:27:42.878000  240774 command_runner.go:130] > # The image used to instantiate infra containers.
	I0108 21:27:42.878006  240774 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:27:42.878011  240774 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0108 21:27:42.878018  240774 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0108 21:27:42.878026  240774 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0108 21:27:42.878033  240774 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:27:42.878041  240774 command_runner.go:130] > # pause_image_auth_file = ""
	I0108 21:27:42.878046  240774 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0108 21:27:42.878055  240774 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0108 21:27:42.878061  240774 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0108 21:27:42.878071  240774 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0108 21:27:42.878077  240774 command_runner.go:130] > # pause_command = "/pause"
	I0108 21:27:42.878083  240774 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0108 21:27:42.878091  240774 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0108 21:27:42.878099  240774 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0108 21:27:42.878108  240774 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0108 21:27:42.878115  240774 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0108 21:27:42.878119  240774 command_runner.go:130] > # signature_policy = ""
	I0108 21:27:42.878130  240774 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0108 21:27:42.878138  240774 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0108 21:27:42.878142  240774 command_runner.go:130] > # changing them here.
	I0108 21:27:42.878152  240774 command_runner.go:130] > # insecure_registries = [
	I0108 21:27:42.878156  240774 command_runner.go:130] > # ]
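
	A minimal sketch of the system-wide route the comments above recommend instead: declaring the registry in /etc/containers/registries.conf (the registry address is illustrative):

	sudo tee -a /etc/containers/registries.conf >/dev/null <<-'EOF'
	[[registry]]
	location = "registry.local:5000"
	insecure = true
	EOF
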
	I0108 21:27:42.878164  240774 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0108 21:27:42.878172  240774 command_runner.go:130] > # ignore; the last of these ignores volumes entirely.
	I0108 21:27:42.878176  240774 command_runner.go:130] > # image_volumes = "mkdir"
	I0108 21:27:42.878183  240774 command_runner.go:130] > # Temporary directory to use for storing big files
	I0108 21:27:42.878188  240774 command_runner.go:130] > # big_files_temporary_dir = ""
	I0108 21:27:42.878199  240774 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0108 21:27:42.878205  240774 command_runner.go:130] > # CNI plugins.
	I0108 21:27:42.878211  240774 command_runner.go:130] > [crio.network]
	I0108 21:27:42.878219  240774 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0108 21:27:42.878227  240774 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0108 21:27:42.878234  240774 command_runner.go:130] > # cni_default_network = ""
	I0108 21:27:42.878240  240774 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0108 21:27:42.878246  240774 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0108 21:27:42.878252  240774 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0108 21:27:42.878258  240774 command_runner.go:130] > # plugin_dirs = [
	I0108 21:27:42.878262  240774 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0108 21:27:42.878268  240774 command_runner.go:130] > # ]
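
	A quick, hedged way to check which CNI configuration CRI-O would pick up from that directory (on this node, minikube's kindnet drops its conflist here later in the run):

	ls /etc/cni/net.d/
	cat /etc/cni/net.d/*.conflist
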
	I0108 21:27:42.878273  240774 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0108 21:27:42.878280  240774 command_runner.go:130] > [crio.metrics]
	I0108 21:27:42.878285  240774 command_runner.go:130] > # Globally enable or disable metrics support.
	I0108 21:27:42.878291  240774 command_runner.go:130] > # enable_metrics = false
	I0108 21:27:42.878296  240774 command_runner.go:130] > # Specify enabled metrics collectors.
	I0108 21:27:42.878302  240774 command_runner.go:130] > # Per default all metrics are enabled.
	I0108 21:27:42.878313  240774 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0108 21:27:42.878321  240774 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0108 21:27:42.878329  240774 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0108 21:27:42.878336  240774 command_runner.go:130] > # metrics_collectors = [
	I0108 21:27:42.878340  240774 command_runner.go:130] > # 	"operations",
	I0108 21:27:42.878347  240774 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0108 21:27:42.878352  240774 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0108 21:27:42.878358  240774 command_runner.go:130] > # 	"operations_errors",
	I0108 21:27:42.878362  240774 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0108 21:27:42.878368  240774 command_runner.go:130] > # 	"image_pulls_by_name",
	I0108 21:27:42.878373  240774 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0108 21:27:42.878379  240774 command_runner.go:130] > # 	"image_pulls_failures",
	I0108 21:27:42.878383  240774 command_runner.go:130] > # 	"image_pulls_successes",
	I0108 21:27:42.878390  240774 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0108 21:27:42.878394  240774 command_runner.go:130] > # 	"image_layer_reuse",
	I0108 21:27:42.878401  240774 command_runner.go:130] > # 	"containers_oom_total",
	I0108 21:27:42.878406  240774 command_runner.go:130] > # 	"containers_oom",
	I0108 21:27:42.878412  240774 command_runner.go:130] > # 	"processes_defunct",
	I0108 21:27:42.878420  240774 command_runner.go:130] > # 	"operations_total",
	I0108 21:27:42.878426  240774 command_runner.go:130] > # 	"operations_latency_seconds",
	I0108 21:27:42.878431  240774 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0108 21:27:42.878438  240774 command_runner.go:130] > # 	"operations_errors_total",
	I0108 21:27:42.878442  240774 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0108 21:27:42.878449  240774 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0108 21:27:42.878453  240774 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0108 21:27:42.878460  240774 command_runner.go:130] > # 	"image_pulls_success_total",
	I0108 21:27:42.878464  240774 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0108 21:27:42.878471  240774 command_runner.go:130] > # 	"containers_oom_count_total",
	I0108 21:27:42.878474  240774 command_runner.go:130] > # ]
	I0108 21:27:42.878479  240774 command_runner.go:130] > # The port on which the metrics server will listen.
	I0108 21:27:42.878485  240774 command_runner.go:130] > # metrics_port = 9090
	I0108 21:27:42.878491  240774 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0108 21:27:42.878497  240774 command_runner.go:130] > # metrics_socket = ""
	I0108 21:27:42.878502  240774 command_runner.go:130] > # The certificate for the secure metrics server.
	I0108 21:27:42.878510  240774 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0108 21:27:42.878516  240774 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0108 21:27:42.878525  240774 command_runner.go:130] > # certificate on any modification event.
	I0108 21:27:42.878532  240774 command_runner.go:130] > # metrics_cert = ""
	I0108 21:27:42.878537  240774 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0108 21:27:42.878544  240774 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0108 21:27:42.878548  240774 command_runner.go:130] > # metrics_key = ""
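
	A hedged sketch of enabling the metrics server via a drop-in and scraping it locally (the port is the default shown above; any collector from the list above would appear in the output):

	sudo tee /etc/crio/crio.conf.d/20-metrics.conf >/dev/null <<-'EOF'
	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	EOF
	sudo systemctl restart crio
	curl -s http://127.0.0.1:9090/metrics | grep -m1 operations
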
	I0108 21:27:42.878556  240774 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0108 21:27:42.878562  240774 command_runner.go:130] > [crio.tracing]
	I0108 21:27:42.878567  240774 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0108 21:27:42.878574  240774 command_runner.go:130] > # enable_tracing = false
	I0108 21:27:42.878579  240774 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0108 21:27:42.878586  240774 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0108 21:27:42.878591  240774 command_runner.go:130] > # Number of samples to collect per million spans.
	I0108 21:27:42.878598  240774 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
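
	A minimal sketch of switching tracing on, assuming an OTLP/gRPC collector is already listening on the default endpoint above (the sampling value is illustrative and samples everything):

	sudo tee /etc/crio/crio.conf.d/30-tracing.conf >/dev/null <<-'EOF'
	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	tracing_sampling_rate_per_million = 1000000
	EOF
	sudo systemctl restart crio
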
	I0108 21:27:42.878603  240774 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0108 21:27:42.878609  240774 command_runner.go:130] > [crio.stats]
	I0108 21:27:42.878615  240774 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0108 21:27:42.878622  240774 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0108 21:27:42.878631  240774 command_runner.go:130] > # stats_collection_period = 0
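
	With stats_collection_period left at its default of 0 as above, stats are computed only when a client asks for them; a hedged way to trigger that on the node:

	sudo crictl stats
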
	I0108 21:27:42.878728  240774 cni.go:84] Creating CNI manager for ""
	I0108 21:27:42.878742  240774 cni.go:136] 1 nodes found, recommending kindnet
	I0108 21:27:42.878763  240774 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:27:42.878783  240774 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-379549 NodeName:multinode-379549 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 21:27:42.878920  240774 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-379549"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:27:42.878982  240774 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-379549 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-379549 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 21:27:42.879030  240774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 21:27:42.886487  240774 command_runner.go:130] > kubeadm
	I0108 21:27:42.886502  240774 command_runner.go:130] > kubectl
	I0108 21:27:42.886506  240774 command_runner.go:130] > kubelet
	I0108 21:27:42.887214  240774 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:27:42.887268  240774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:27:42.894905  240774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0108 21:27:42.911044  240774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:27:42.927314  240774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0108 21:27:42.942960  240774 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:27:42.946058  240774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:27:42.955443  240774 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549 for IP: 192.168.58.2
	I0108 21:27:42.955478  240774 certs.go:190] acquiring lock for shared ca certs: {Name:mk66e763e1c1c88a577c7e7f60df668cab98f63b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:27:42.955633  240774 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-150013/.minikube/ca.key
	I0108 21:27:42.955694  240774 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-150013/.minikube/proxy-client-ca.key
	I0108 21:27:42.955764  240774 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/client.key
	I0108 21:27:42.955781  240774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/client.crt with IP's: []
	I0108 21:27:43.225420  240774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/client.crt ...
	I0108 21:27:43.225463  240774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/client.crt: {Name:mk8ffe16e2f3322d1c7cab688a94620f83a3c975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:27:43.225635  240774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/client.key ...
	I0108 21:27:43.225646  240774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/client.key: {Name:mk77781229c17d19a2b38873336b6e0ece1b08e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:27:43.225756  240774 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/apiserver.key.cee25041
	I0108 21:27:43.225780  240774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 21:27:43.397526  240774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/apiserver.crt.cee25041 ...
	I0108 21:27:43.397567  240774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/apiserver.crt.cee25041: {Name:mk82d99001e76e0162de615b4c11fd543b3a0f1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:27:43.397743  240774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/apiserver.key.cee25041 ...
	I0108 21:27:43.397760  240774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/apiserver.key.cee25041: {Name:mk0887078ffe79c3858c7ae534419a92df88c8a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:27:43.397828  240774 certs.go:337] copying /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/apiserver.crt
	I0108 21:27:43.397932  240774 certs.go:341] copying /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/apiserver.key
	I0108 21:27:43.397988  240774 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/proxy-client.key
	I0108 21:27:43.398003  240774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/proxy-client.crt with IP's: []
	I0108 21:27:43.501242  240774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/proxy-client.crt ...
	I0108 21:27:43.501277  240774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/proxy-client.crt: {Name:mk4c3ff8f716031212ae77150b71567ec09329de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:27:43.501455  240774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/proxy-client.key ...
	I0108 21:27:43.501468  240774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/proxy-client.key: {Name:mk1715019535d4bb4cdbef6aa7d68c7e3bd5e9b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:27:43.501589  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 21:27:43.501616  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 21:27:43.501626  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 21:27:43.501638  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 21:27:43.501648  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 21:27:43.501658  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 21:27:43.501671  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 21:27:43.501687  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 21:27:43.501742  240774 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/home/jenkins/minikube-integration/17866-150013/.minikube/certs/156648.pem (1338 bytes)
	W0108 21:27:43.501778  240774 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-150013/.minikube/certs/home/jenkins/minikube-integration/17866-150013/.minikube/certs/156648_empty.pem, impossibly tiny 0 bytes
	I0108 21:27:43.501790  240774 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca-key.pem (1679 bytes)
	I0108 21:27:43.501813  240774 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:27:43.501837  240774 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/home/jenkins/minikube-integration/17866-150013/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:27:43.501859  240774 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/home/jenkins/minikube-integration/17866-150013/.minikube/certs/key.pem (1675 bytes)
	I0108 21:27:43.501897  240774 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-150013/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-150013/.minikube/files/etc/ssl/certs/1566482.pem (1708 bytes)
	I0108 21:27:43.501921  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:27:43.501935  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/156648.pem -> /usr/share/ca-certificates/156648.pem
	I0108 21:27:43.501946  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/files/etc/ssl/certs/1566482.pem -> /usr/share/ca-certificates/1566482.pem
	I0108 21:27:43.502491  240774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:27:43.524364  240774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 21:27:43.544540  240774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:27:43.564441  240774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 21:27:43.584420  240774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:27:43.604403  240774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:27:43.624802  240774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:27:43.644889  240774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 21:27:43.665225  240774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:27:43.685477  240774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/certs/156648.pem --> /usr/share/ca-certificates/156648.pem (1338 bytes)
	I0108 21:27:43.705663  240774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/files/etc/ssl/certs/1566482.pem --> /usr/share/ca-certificates/1566482.pem (1708 bytes)
	I0108 21:27:43.725741  240774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:27:43.741014  240774 ssh_runner.go:195] Run: openssl version
	I0108 21:27:43.745913  240774 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0108 21:27:43.745993  240774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:27:43.754079  240774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:27:43.757044  240774 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:27:43.757108  240774 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:27:43.757150  240774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:27:43.762898  240774 command_runner.go:130] > b5213941
	I0108 21:27:43.763156  240774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:27:43.771180  240774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/156648.pem && ln -fs /usr/share/ca-certificates/156648.pem /etc/ssl/certs/156648.pem"
	I0108 21:27:43.778971  240774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/156648.pem
	I0108 21:27:43.781963  240774 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 21:15 /usr/share/ca-certificates/156648.pem
	I0108 21:27:43.781993  240774 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:15 /usr/share/ca-certificates/156648.pem
	I0108 21:27:43.782022  240774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/156648.pem
	I0108 21:27:43.787589  240774 command_runner.go:130] > 51391683
	I0108 21:27:43.787841  240774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/156648.pem /etc/ssl/certs/51391683.0"
	I0108 21:27:43.795365  240774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1566482.pem && ln -fs /usr/share/ca-certificates/1566482.pem /etc/ssl/certs/1566482.pem"
	I0108 21:27:43.802981  240774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1566482.pem
	I0108 21:27:43.805694  240774 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 21:15 /usr/share/ca-certificates/1566482.pem
	I0108 21:27:43.805718  240774 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:15 /usr/share/ca-certificates/1566482.pem
	I0108 21:27:43.805752  240774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1566482.pem
	I0108 21:27:43.811279  240774 command_runner.go:130] > 3ec20f2e
	I0108 21:27:43.811530  240774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1566482.pem /etc/ssl/certs/3ec20f2e.0"
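
	A hedged recap of the convention the three blocks above follow: OpenSSL resolves CAs in /etc/ssl/certs via subject-hash symlinks, so once the <hash>.0 link exists the certificate verifies against the directory:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, matching the symlink created above
	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem
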
	I0108 21:27:43.819093  240774 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 21:27:43.821715  240774 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:27:43.821758  240774 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:27:43.821811  240774 kubeadm.go:404] StartCluster: {Name:multinode-379549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-379549 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:27:43.821894  240774 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 21:27:43.821959  240774 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:27:43.853629  240774 cri.go:89] found id: ""
	I0108 21:27:43.853685  240774 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:27:43.861321  240774 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0108 21:27:43.861343  240774 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0108 21:27:43.861349  240774 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0108 21:27:43.861410  240774 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:27:43.868864  240774 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:27:43.868909  240774 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:27:43.875742  240774 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0108 21:27:43.875766  240774 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0108 21:27:43.875774  240774 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0108 21:27:43.875787  240774 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:27:43.876493  240774 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:27:43.876543  240774 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:27:43.919023  240774 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 21:27:43.919059  240774 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0108 21:27:43.919154  240774 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 21:27:43.919167  240774 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 21:27:43.953313  240774 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:27:43.953335  240774 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:27:43.953426  240774 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I0108 21:27:43.953436  240774 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1047-gcp
	I0108 21:27:43.953495  240774 kubeadm.go:322] OS: Linux
	I0108 21:27:43.953507  240774 command_runner.go:130] > OS: Linux
	I0108 21:27:43.953561  240774 kubeadm.go:322] CGROUPS_CPU: enabled
	I0108 21:27:43.953584  240774 command_runner.go:130] > CGROUPS_CPU: enabled
	I0108 21:27:43.953657  240774 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0108 21:27:43.953668  240774 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0108 21:27:43.953731  240774 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0108 21:27:43.953742  240774 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0108 21:27:43.953805  240774 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0108 21:27:43.953815  240774 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0108 21:27:43.953876  240774 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0108 21:27:43.953896  240774 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0108 21:27:43.953980  240774 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0108 21:27:43.953993  240774 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0108 21:27:43.954062  240774 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0108 21:27:43.954073  240774 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0108 21:27:43.954142  240774 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0108 21:27:43.954157  240774 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0108 21:27:43.954228  240774 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0108 21:27:43.954239  240774 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0108 21:27:44.013799  240774 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:27:44.013836  240774 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:27:44.014002  240774 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:27:44.014016  240774 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:27:44.014126  240774 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0108 21:27:44.014155  240774 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
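
	A minimal sketch of that pre-pull step, pinned to the version this init reports (run on the node ahead of 'kubeadm init'):

	kubeadm config images pull --kubernetes-version v1.28.4
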
	I0108 21:27:44.207325  240774 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:27:44.207341  240774 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:27:44.211434  240774 out.go:204]   - Generating certificates and keys ...
	I0108 21:27:44.211545  240774 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0108 21:27:44.211578  240774 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 21:27:44.211657  240774 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0108 21:27:44.211666  240774 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 21:27:44.296830  240774 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 21:27:44.296887  240774 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 21:27:44.547505  240774 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 21:27:44.547545  240774 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0108 21:27:44.898653  240774 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 21:27:44.898686  240774 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0108 21:27:45.033807  240774 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 21:27:45.033837  240774 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0108 21:27:45.190022  240774 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 21:27:45.190080  240774 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0108 21:27:45.190255  240774 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-379549] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0108 21:27:45.190271  240774 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-379549] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0108 21:27:45.368707  240774 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 21:27:45.368750  240774 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0108 21:27:45.368908  240774 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-379549] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0108 21:27:45.368924  240774 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-379549] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0108 21:27:45.543887  240774 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 21:27:45.543915  240774 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 21:27:45.740161  240774 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 21:27:45.740194  240774 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 21:27:46.013313  240774 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 21:27:46.013348  240774 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0108 21:27:46.013424  240774 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:27:46.013471  240774 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:27:46.098829  240774 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:27:46.098859  240774 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:27:46.180898  240774 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:27:46.180934  240774 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:27:46.292449  240774 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:27:46.292479  240774 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:27:46.572505  240774 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:27:46.572536  240774 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:27:46.572992  240774 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:27:46.573021  240774 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:27:46.576044  240774 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:27:46.576064  240774 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:27:46.578380  240774 out.go:204]   - Booting up control plane ...
	I0108 21:27:46.578499  240774 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:27:46.578507  240774 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:27:46.578603  240774 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:27:46.578612  240774 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:27:46.578697  240774 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:27:46.578705  240774 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:27:46.586278  240774 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:27:46.586305  240774 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:27:46.586979  240774 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:27:46.587004  240774 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:27:46.587055  240774 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 21:27:46.587069  240774 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 21:27:46.658810  240774 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:27:46.658835  240774 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:27:51.660314  240774 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.001606 seconds
	I0108 21:27:51.660359  240774 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.001606 seconds
	I0108 21:27:51.660587  240774 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:27:51.660633  240774 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:27:51.673539  240774 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:27:51.673559  240774 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:27:52.191180  240774 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:27:52.191215  240774 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:27:52.191393  240774 kubeadm.go:322] [mark-control-plane] Marking the node multinode-379549 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:27:52.191412  240774 command_runner.go:130] > [mark-control-plane] Marking the node multinode-379549 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:27:52.702436  240774 kubeadm.go:322] [bootstrap-token] Using token: 0nmc1l.c2bsjaznc16a705j
	I0108 21:27:52.703810  240774 out.go:204]   - Configuring RBAC rules ...
	I0108 21:27:52.702487  240774 command_runner.go:130] > [bootstrap-token] Using token: 0nmc1l.c2bsjaznc16a705j
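
	A hedged aside for inspecting the bootstrap token kubeadm just minted (standard kubeadm commands, run on the control-plane node; output omitted here):

	kubeadm token list
	kubeadm token create --print-join-command
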
	I0108 21:27:52.703951  240774 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:27:52.703968  240774 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:27:52.707821  240774 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:27:52.707840  240774 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:27:52.713520  240774 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:27:52.713540  240774 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:27:52.716047  240774 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:27:52.716065  240774 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:27:52.718504  240774 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:27:52.718522  240774 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:27:52.722312  240774 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:27:52.722333  240774 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:27:52.732381  240774 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:27:52.732401  240774 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:27:52.932808  240774 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 21:27:52.932852  240774 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0108 21:27:53.117978  240774 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 21:27:53.118003  240774 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0108 21:27:53.119413  240774 kubeadm.go:322] 
	I0108 21:27:53.119545  240774 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 21:27:53.119564  240774 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0108 21:27:53.119572  240774 kubeadm.go:322] 
	I0108 21:27:53.119703  240774 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 21:27:53.119723  240774 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0108 21:27:53.119753  240774 kubeadm.go:322] 
	I0108 21:27:53.119786  240774 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 21:27:53.119795  240774 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0108 21:27:53.119886  240774 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:27:53.119903  240774 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:27:53.119982  240774 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:27:53.119993  240774 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:27:53.120000  240774 kubeadm.go:322] 
	I0108 21:27:53.120070  240774 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 21:27:53.120080  240774 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0108 21:27:53.120084  240774 kubeadm.go:322] 
	I0108 21:27:53.120164  240774 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:27:53.120207  240774 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:27:53.120232  240774 kubeadm.go:322] 
	I0108 21:27:53.120289  240774 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 21:27:53.120303  240774 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0108 21:27:53.120402  240774 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:27:53.120414  240774 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:27:53.120488  240774 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:27:53.120500  240774 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:27:53.120506  240774 kubeadm.go:322] 
	I0108 21:27:53.120620  240774 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:27:53.120639  240774 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:27:53.120744  240774 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 21:27:53.120757  240774 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0108 21:27:53.120763  240774 kubeadm.go:322] 
	I0108 21:27:53.120902  240774 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0nmc1l.c2bsjaznc16a705j \
	I0108 21:27:53.120919  240774 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 0nmc1l.c2bsjaznc16a705j \
	I0108 21:27:53.121084  240774 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fe80ea8f0241372b35f859c8f235bcbcae49b73ca5a44c92d8472de9d18d4109 \
	I0108 21:27:53.121097  240774 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:fe80ea8f0241372b35f859c8f235bcbcae49b73ca5a44c92d8472de9d18d4109 \
	I0108 21:27:53.121128  240774 kubeadm.go:322] 	--control-plane 
	I0108 21:27:53.121136  240774 command_runner.go:130] > 	--control-plane 
	I0108 21:27:53.121142  240774 kubeadm.go:322] 
	I0108 21:27:53.121258  240774 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:27:53.121289  240774 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:27:53.121328  240774 kubeadm.go:322] 
	I0108 21:27:53.121455  240774 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0nmc1l.c2bsjaznc16a705j \
	I0108 21:27:53.121475  240774 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 0nmc1l.c2bsjaznc16a705j \
	I0108 21:27:53.121600  240774 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fe80ea8f0241372b35f859c8f235bcbcae49b73ca5a44c92d8472de9d18d4109 
	I0108 21:27:53.121608  240774 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:fe80ea8f0241372b35f859c8f235bcbcae49b73ca5a44c92d8472de9d18d4109 
	I0108 21:27:53.123393  240774 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I0108 21:27:53.123410  240774 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I0108 21:27:53.123520  240774 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:27:53.123540  240774 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
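	Both preflight warnings above are benign for this containerized node, but on a host where they matter they could be addressed along these lines (a sketch only; neither command is run by minikube here, and it assumes a distro kernel that ships the "configs" module):

	    # let kubeadm's SystemVerification check parse the kernel config
	    sudo modprobe configs
	    # enable kubelet at boot, as the second warning suggests
	    sudo systemctl enable kubelet.service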
	I0108 21:27:53.123560  240774 cni.go:84] Creating CNI manager for ""
	I0108 21:27:53.123572  240774 cni.go:136] 1 nodes found, recommending kindnet
	I0108 21:27:53.125486  240774 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:27:53.126836  240774 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:27:53.130764  240774 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 21:27:53.130815  240774 command_runner.go:130] >   Size: 4085020   	Blocks: 7992       IO Block: 4096   regular file
	I0108 21:27:53.130826  240774 command_runner.go:130] > Device: 37h/55d	Inode: 560014      Links: 1
	I0108 21:27:53.130837  240774 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 21:27:53.130851  240774 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I0108 21:27:53.130864  240774 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I0108 21:27:53.130874  240774 command_runner.go:130] > Change: 2024-01-08 21:09:21.577697444 +0000
	I0108 21:27:53.130887  240774 command_runner.go:130] >  Birth: 2024-01-08 21:09:21.553695143 +0000
	I0108 21:27:53.130955  240774 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 21:27:53.130970  240774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 21:27:53.149192  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:27:53.782781  240774 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0108 21:27:53.787279  240774 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0108 21:27:53.793355  240774 command_runner.go:130] > serviceaccount/kindnet created
	I0108 21:27:53.802297  240774 command_runner.go:130] > daemonset.apps/kindnet created
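	The CNI step above stages a generated kindnet manifest on the node and applies it with the bundled kubectl. Reproduced by hand from inside the node (paths and object names taken from the log lines above), it would look roughly like:

	    sudo /var/lib/minikube/binaries/v1.28.4/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig apply -f /var/tmp/minikube/cni.yaml
	    # the manifest creates, among others, daemonset.apps/kindnet in kube-system
	    kubectl -n kube-system get daemonset kindnet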
	I0108 21:27:53.806969  240774 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:27:53.807076  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=multinode-379549 minikube.k8s.io/updated_at=2024_01_08T21_27_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:27:53.807081  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:27:53.813695  240774 command_runner.go:130] > -16
	I0108 21:27:53.813731  240774 ops.go:34] apiserver oom_adj: -16
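	The oom_adj check above is a single read; -16 on the legacy /proc/<pid>/oom_adj scale (-17..15) makes kube-apiserver one of the last processes the kernel OOM killer will choose:

	    cat /proc/$(pgrep kube-apiserver)/oom_adj   # prints -16 on this node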
	I0108 21:27:53.914483  240774 command_runner.go:130] > node/multinode-379549 labeled
	I0108 21:27:53.917414  240774 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0108 21:27:53.917558  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:27:53.978748  240774 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:27:54.418133  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:27:54.481958  240774 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:27:54.918653  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:27:54.977874  240774 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:27:55.417976  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:27:55.478898  240774 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:27:55.917686  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:27:55.978667  240774 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:27:56.417818  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:27:56.481532  240774 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:27:56.918211  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:27:56.978305  240774 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:27:57.418610  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:27:57.480361  240774 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:27:57.917760  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:27:57.978746  240774 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:27:58.418044  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:27:58.478231  240774 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:27:58.918395  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:27:58.981277  240774 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:27:59.417594  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:27:59.480442  240774 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:27:59.917609  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:27:59.979345  240774 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:28:00.418656  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:28:00.478883  240774 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:28:00.917862  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:28:00.980870  240774 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:28:01.418111  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:28:01.477976  240774 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:28:01.918182  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:28:01.980046  240774 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:28:02.417586  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:28:02.480817  240774 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:28:02.918536  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:28:02.984639  240774 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:28:03.418171  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:28:03.482688  240774 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:28:03.918453  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:28:03.981485  240774 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:28:04.417690  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:28:04.481297  240774 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:28:04.918672  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:28:04.979749  240774 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:28:05.418030  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:28:05.483581  240774 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:28:05.918230  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:28:05.981258  240774 command_runner.go:130] > NAME      SECRETS   AGE
	I0108 21:28:05.981281  240774 command_runner.go:130] > default   0         0s
	I0108 21:28:05.984162  240774 kubeadm.go:1088] duration metric: took 12.177144422s to wait for elevateKubeSystemPrivileges.
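	The repeated `serviceaccounts "default" not found` responses above are an expected poll: minikube retries roughly every 500ms until kube-controller-manager creates the default ServiceAccount (about 12s here). A hypothetical standalone form of the same wait:

	    until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
	      sleep 0.5   # matches the retry cadence visible in the timestamps above
	    done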
	I0108 21:28:05.984193  240774 kubeadm.go:406] StartCluster complete in 22.162396081s
	I0108 21:28:05.984219  240774 settings.go:142] acquiring lock: {Name:mka49c6122422560714ade880e41fa20698ed59b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:28:05.984296  240774 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-150013/kubeconfig
	I0108 21:28:05.984965  240774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-150013/kubeconfig: {Name:mk7bacc6ac7c9afd0d9363f33909f58b6056dc76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:28:05.985191  240774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:28:05.985301  240774 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 21:28:05.985383  240774 addons.go:69] Setting storage-provisioner=true in profile "multinode-379549"
	I0108 21:28:05.985401  240774 addons.go:69] Setting default-storageclass=true in profile "multinode-379549"
	I0108 21:28:05.985420  240774 addons.go:237] Setting addon storage-provisioner=true in "multinode-379549"
	I0108 21:28:05.985431  240774 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-379549"
	I0108 21:28:05.985405  240774 config.go:182] Loaded profile config "multinode-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:28:05.985503  240774 host.go:66] Checking if "multinode-379549" exists ...
	I0108 21:28:05.985588  240774 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-150013/kubeconfig
	I0108 21:28:05.985836  240774 cli_runner.go:164] Run: docker container inspect multinode-379549 --format={{.State.Status}}
	I0108 21:28:05.986003  240774 cli_runner.go:164] Run: docker container inspect multinode-379549 --format={{.State.Status}}
	I0108 21:28:05.985942  240774 kapi.go:59] client config for multinode-379549: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/client.key", CAFile:"/home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:28:05.986713  240774 cert_rotation.go:137] Starting client certificate rotation controller
	I0108 21:28:05.986979  240774 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 21:28:05.986994  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:05.987002  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:05.987009  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:05.995791  240774 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0108 21:28:05.995819  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:05.995826  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:05.995833  240774 round_trippers.go:580]     Content-Length: 291
	I0108 21:28:05.995840  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:05 GMT
	I0108 21:28:05.995848  240774 round_trippers.go:580]     Audit-Id: a8c81666-c0aa-4ab1-b2fd-c42035f72eb4
	I0108 21:28:05.995855  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:05.995863  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:05.995871  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:05.995917  240774 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"def710b2-ad1a-496c-8896-306b3bb5308c","resourceVersion":"264","creationTimestamp":"2024-01-08T21:27:52Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0108 21:28:05.996433  240774 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"def710b2-ad1a-496c-8896-306b3bb5308c","resourceVersion":"264","creationTimestamp":"2024-01-08T21:27:52Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0108 21:28:05.996504  240774 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 21:28:05.996519  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:05.996530  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:05.996540  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:05.996549  240774 round_trippers.go:473]     Content-Type: application/json
	I0108 21:28:06.002540  240774 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 21:28:06.002565  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:06.002575  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:06.002584  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:06.002592  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:06.002600  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:06.002609  240774 round_trippers.go:580]     Content-Length: 291
	I0108 21:28:06.002622  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:06 GMT
	I0108 21:28:06.002630  240774 round_trippers.go:580]     Audit-Id: 1fe68046-f800-4ecf-9b99-48f155123921
	I0108 21:28:06.002659  240774 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"def710b2-ad1a-496c-8896-306b3bb5308c","resourceVersion":"348","creationTimestamp":"2024-01-08T21:27:52Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
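	The GET/PUT pair against the autoscaling/v1 Scale subresource above drops the coredns Deployment from 2 replicas to 1. The kubectl equivalent (not what minikube literally runs) would be:

	    kubectl -n kube-system scale deployment coredns --replicas=1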
	I0108 21:28:06.005647  240774 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-150013/kubeconfig
	I0108 21:28:06.006046  240774 kapi.go:59] client config for multinode-379549: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/client.key", CAFile:"/home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:28:06.006335  240774 addons.go:237] Setting addon default-storageclass=true in "multinode-379549"
	I0108 21:28:06.006371  240774 host.go:66] Checking if "multinode-379549" exists ...
	I0108 21:28:06.006719  240774 cli_runner.go:164] Run: docker container inspect multinode-379549 --format={{.State.Status}}
	I0108 21:28:06.009614  240774 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:28:06.011143  240774 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:28:06.011166  240774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:28:06.011223  240774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-379549
	I0108 21:28:06.025626  240774 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:28:06.028408  240774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:28:06.028483  240774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-379549
	I0108 21:28:06.028775  240774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/multinode-379549/id_rsa Username:docker}
	I0108 21:28:06.049342  240774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/multinode-379549/id_rsa Username:docker}
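	Each ssh_runner call reuses the host port Docker published for the node container's sshd; the cli_runner and sshutil lines above show how it is resolved. Spelled out (inspect template verbatim from the log; port 32847 was assigned by Docker on this run):

	    PORT=$(docker container inspect -f \
	      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' multinode-379549)
	    ssh -p "$PORT" -i /home/jenkins/minikube-integration/17866-150013/.minikube/machines/multinode-379549/id_rsa docker@127.0.0.1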
	I0108 21:28:06.071977  240774 command_runner.go:130] > apiVersion: v1
	I0108 21:28:06.072000  240774 command_runner.go:130] > data:
	I0108 21:28:06.072006  240774 command_runner.go:130] >   Corefile: |
	I0108 21:28:06.072011  240774 command_runner.go:130] >     .:53 {
	I0108 21:28:06.072017  240774 command_runner.go:130] >         errors
	I0108 21:28:06.072023  240774 command_runner.go:130] >         health {
	I0108 21:28:06.072031  240774 command_runner.go:130] >            lameduck 5s
	I0108 21:28:06.072037  240774 command_runner.go:130] >         }
	I0108 21:28:06.072044  240774 command_runner.go:130] >         ready
	I0108 21:28:06.072058  240774 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0108 21:28:06.072070  240774 command_runner.go:130] >            pods insecure
	I0108 21:28:06.072081  240774 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0108 21:28:06.072093  240774 command_runner.go:130] >            ttl 30
	I0108 21:28:06.072105  240774 command_runner.go:130] >         }
	I0108 21:28:06.072116  240774 command_runner.go:130] >         prometheus :9153
	I0108 21:28:06.072128  240774 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0108 21:28:06.072140  240774 command_runner.go:130] >            max_concurrent 1000
	I0108 21:28:06.072150  240774 command_runner.go:130] >         }
	I0108 21:28:06.072159  240774 command_runner.go:130] >         cache 30
	I0108 21:28:06.072169  240774 command_runner.go:130] >         loop
	I0108 21:28:06.072180  240774 command_runner.go:130] >         reload
	I0108 21:28:06.072190  240774 command_runner.go:130] >         loadbalance
	I0108 21:28:06.072197  240774 command_runner.go:130] >     }
	I0108 21:28:06.072207  240774 command_runner.go:130] > kind: ConfigMap
	I0108 21:28:06.072215  240774 command_runner.go:130] > metadata:
	I0108 21:28:06.072226  240774 command_runner.go:130] >   creationTimestamp: "2024-01-08T21:27:52Z"
	I0108 21:28:06.072236  240774 command_runner.go:130] >   name: coredns
	I0108 21:28:06.072246  240774 command_runner.go:130] >   namespace: kube-system
	I0108 21:28:06.072256  240774 command_runner.go:130] >   resourceVersion: "260"
	I0108 21:28:06.072273  240774 command_runner.go:130] >   uid: e83c441f-efed-4891-b6cf-fb52acea3baa
	I0108 21:28:06.072430  240774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
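	The pipeline above rewrites the coredns ConfigMap in place. Its effect on the Corefile, reconstructed from the two sed expressions (the rewritten file itself is not echoed in this log), is:

	    # inserted before the "errors" plugin:
	    #     log
	    # inserted before "forward . /etc/resolv.conf":
	    #     hosts {
	    #        192.168.58.1 host.minikube.internal
	    #        fallthrough
	    #     }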
	I0108 21:28:06.220279  240774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:28:06.314263  240774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:28:06.487377  240774 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 21:28:06.487398  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:06.487407  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:06.487413  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:06.518801  240774 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I0108 21:28:06.518830  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:06.518841  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:06.518848  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:06.518855  240774 round_trippers.go:580]     Content-Length: 291
	I0108 21:28:06.518862  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:06 GMT
	I0108 21:28:06.518870  240774 round_trippers.go:580]     Audit-Id: 70854adb-a687-4807-b37a-a6bd24bb4c43
	I0108 21:28:06.518878  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:06.518886  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:06.518917  240774 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"def710b2-ad1a-496c-8896-306b3bb5308c","resourceVersion":"373","creationTimestamp":"2024-01-08T21:27:52Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0108 21:28:06.519067  240774 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-379549" context rescaled to 1 replicas
	I0108 21:28:06.519106  240774 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 21:28:06.520871  240774 out.go:177] * Verifying Kubernetes components...
	I0108 21:28:06.522181  240774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:28:06.820482  240774 command_runner.go:130] > configmap/coredns replaced
	I0108 21:28:06.820531  240774 start.go:929] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0108 21:28:07.138701  240774 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0108 21:28:07.145615  240774 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0108 21:28:07.152254  240774 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0108 21:28:07.159009  240774 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0108 21:28:07.164957  240774 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0108 21:28:07.172224  240774 command_runner.go:130] > pod/storage-provisioner created
	I0108 21:28:07.176716  240774 command_runner.go:130] > storageclass.storage.k8s.io/standard created
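	A quick way to confirm the two addons just applied (object names taken from the created lines above):

	    kubectl -n kube-system get pod storage-provisioner
	    kubectl get storageclass standard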
	I0108 21:28:07.176878  240774 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0108 21:28:07.176891  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:07.176902  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:07.176912  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:07.177221  240774 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-150013/kubeconfig
	I0108 21:28:07.177560  240774 kapi.go:59] client config for multinode-379549: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/client.key", CAFile:"/home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:28:07.177891  240774 node_ready.go:35] waiting up to 6m0s for node "multinode-379549" to be "Ready" ...
	I0108 21:28:07.177981  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:07.177991  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:07.178003  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:07.178016  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:07.178560  240774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:28:07.178577  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:07.178586  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:07.178594  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:07.178605  240774 round_trippers.go:580]     Content-Length: 1273
	I0108 21:28:07.178617  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:07 GMT
	I0108 21:28:07.178627  240774 round_trippers.go:580]     Audit-Id: 03b9697e-c24d-477f-b7b1-d1a7228dac59
	I0108 21:28:07.178638  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:07.178650  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:07.178685  240774 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"410"},"items":[{"metadata":{"name":"standard","uid":"69bfc917-1b6c-445d-a399-887a26b6d886","resourceVersion":"397","creationTimestamp":"2024-01-08T21:28:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T21:28:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0108 21:28:07.179021  240774 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"69bfc917-1b6c-445d-a399-887a26b6d886","resourceVersion":"397","creationTimestamp":"2024-01-08T21:28:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T21:28:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0108 21:28:07.179074  240774 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0108 21:28:07.179087  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:07.179097  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:07.179105  240774 round_trippers.go:473]     Content-Type: application/json
	I0108 21:28:07.179114  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:07.180033  240774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:28:07.180054  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:07.180064  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:07.180074  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:07.180082  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:07.180090  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:07 GMT
	I0108 21:28:07.180102  240774 round_trippers.go:580]     Audit-Id: 02a9ef80-8cda-47f6-9c7c-720befbcad18
	I0108 21:28:07.180110  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:07.180336  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:07.181291  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:07.181305  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:07.181312  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:07.181317  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:07.181322  240774 round_trippers.go:580]     Content-Length: 1220
	I0108 21:28:07.181327  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:07 GMT
	I0108 21:28:07.181335  240774 round_trippers.go:580]     Audit-Id: b201e75b-4e4d-43c6-b669-ac696758fee9
	I0108 21:28:07.181341  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:07.181350  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:07.181373  240774 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"69bfc917-1b6c-445d-a399-887a26b6d886","resourceVersion":"397","creationTimestamp":"2024-01-08T21:28:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T21:28:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0108 21:28:07.183301  240774 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 21:28:07.184637  240774 addons.go:508] enable addons completed in 1.199340618s: enabled=[storage-provisioner default-storageclass]
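	The PUT to /storageclasses/standard above re-asserts the storageclass.kubernetes.io/is-default-class annotation that storageclass.yaml ships with. The documented kubectl idiom for the same change (again, not minikube's literal call) is:

	    kubectl patch storageclass standard -p \
	      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'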
	I0108 21:28:07.679037  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:07.679056  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:07.679065  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:07.679071  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:07.681464  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:07.681487  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:07.681498  240774 round_trippers.go:580]     Audit-Id: 7054e025-428f-47d4-9334-5ab207e197e2
	I0108 21:28:07.681507  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:07.681515  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:07.681523  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:07.681531  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:07.681543  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:07 GMT
	I0108 21:28:07.681689  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:08.178165  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:08.178189  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:08.178197  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:08.178203  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:08.180533  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:08.180558  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:08.180568  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:08 GMT
	I0108 21:28:08.180578  240774 round_trippers.go:580]     Audit-Id: 27835ec2-f132-4e4b-a812-03778c4b8da6
	I0108 21:28:08.180585  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:08.180593  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:08.180603  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:08.180614  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:08.180755  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:08.678286  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:08.678309  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:08.678318  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:08.678325  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:08.680700  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:08.680719  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:08.680726  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:08.680731  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:08.680737  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:08.680742  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:08.680747  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:08 GMT
	I0108 21:28:08.680752  240774 round_trippers.go:580]     Audit-Id: d715439f-43bf-46a9-ba63-26c1ac18689d
	I0108 21:28:08.680877  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:09.178379  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:09.178401  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:09.178409  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:09.178415  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:09.180553  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:09.180572  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:09.180579  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:09 GMT
	I0108 21:28:09.180599  240774 round_trippers.go:580]     Audit-Id: 0c126072-22f3-466d-ac30-40adb3cd4912
	I0108 21:28:09.180606  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:09.180614  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:09.180625  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:09.180642  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:09.180787  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:09.181222  240774 node_ready.go:58] node "multinode-379549" has status "Ready":"False"
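	node_ready polls GET /api/v1/nodes/multinode-379549 twice a second for up to 6m until the node's status reports Ready. A hypothetical one-line equivalent of that wait:

	    kubectl wait --for=condition=Ready node/multinode-379549 --timeout=6m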
	I0108 21:28:09.678340  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:09.678361  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:09.678370  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:09.678376  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:09.680799  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:09.680829  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:09.680840  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:09 GMT
	I0108 21:28:09.680849  240774 round_trippers.go:580]     Audit-Id: 8e9e8504-4c84-4bc9-911f-1fae0dc54633
	I0108 21:28:09.680858  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:09.680871  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:09.680883  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:09.680894  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:09.680989  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:10.178550  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:10.178572  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:10.178580  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:10.178609  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:10.181006  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:10.181025  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:10.181031  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:10.181037  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:10.181042  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:10 GMT
	I0108 21:28:10.181047  240774 round_trippers.go:580]     Audit-Id: e978b304-8c3b-49f9-8ef9-120216264198
	I0108 21:28:10.181052  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:10.181057  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:10.181154  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:10.678761  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:10.678786  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:10.678794  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:10.678800  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:10.681002  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:10.681020  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:10.681027  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:10.681035  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:10.681040  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:10 GMT
	I0108 21:28:10.681045  240774 round_trippers.go:580]     Audit-Id: 9beedb13-c1a8-4675-b5c2-ce6b5d8dfac9
	I0108 21:28:10.681050  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:10.681055  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:10.681155  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:11.178114  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:11.178135  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:11.178143  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:11.178150  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:11.180509  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:11.180539  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:11.180551  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:11.180561  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:11.180575  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:11 GMT
	I0108 21:28:11.180584  240774 round_trippers.go:580]     Audit-Id: 03a698d9-cc02-4591-bd7d-7cf13c7272d6
	I0108 21:28:11.180596  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:11.180606  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:11.180766  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:11.678250  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:11.678281  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:11.678289  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:11.678296  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:11.680535  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:11.680559  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:11.680570  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:11.680580  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:11.680593  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:11.680603  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:11.680612  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:11 GMT
	I0108 21:28:11.680623  240774 round_trippers.go:580]     Audit-Id: 8ee3b394-856e-4ce4-a879-50574680dd8e
	I0108 21:28:11.680778  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:11.681196  240774 node_ready.go:58] node "multinode-379549" has status "Ready":"False"
	I0108 21:28:12.178299  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:12.178320  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:12.178328  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:12.178334  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:12.180646  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:12.180670  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:12.180680  240774 round_trippers.go:580]     Audit-Id: de998a3d-0f12-475f-9726-c625b7aba221
	I0108 21:28:12.180689  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:12.180698  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:12.180706  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:12.180714  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:12.180725  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:12 GMT
	I0108 21:28:12.180943  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:12.678435  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:12.678457  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:12.678465  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:12.678472  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:12.680684  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:12.680702  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:12.680709  240774 round_trippers.go:580]     Audit-Id: 6769d4c0-2c8f-4380-8b9e-46a7123df423
	I0108 21:28:12.680714  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:12.680719  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:12.680725  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:12.680730  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:12.680735  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:12 GMT
	I0108 21:28:12.680915  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:13.178524  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:13.178546  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:13.178555  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:13.178561  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:13.180996  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:13.181017  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:13.181028  240774 round_trippers.go:580]     Audit-Id: c899e29e-2bad-484d-bbe3-f960f079b1ff
	I0108 21:28:13.181037  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:13.181047  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:13.181059  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:13.181065  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:13.181070  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:13 GMT
	I0108 21:28:13.181196  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:13.678692  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:13.678716  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:13.678728  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:13.678736  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:13.681017  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:13.681037  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:13.681047  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:13.681056  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:13.681065  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:13 GMT
	I0108 21:28:13.681075  240774 round_trippers.go:580]     Audit-Id: f3a7434d-9d90-433e-af8f-e47454476dc9
	I0108 21:28:13.681090  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:13.681096  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:13.681224  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:13.681593  240774 node_ready.go:58] node "multinode-379549" has status "Ready":"False"
	I0108 21:28:14.178685  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:14.178704  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:14.178716  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:14.178724  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:14.181014  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:14.181041  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:14.181052  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:14.181060  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:14.181068  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:14.181075  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:14 GMT
	I0108 21:28:14.181083  240774 round_trippers.go:580]     Audit-Id: e4eecc35-08b7-4cff-81cc-ac2de5616f2b
	I0108 21:28:14.181091  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:14.181299  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:14.678682  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:14.678703  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:14.678712  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:14.678717  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:14.681018  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:14.681040  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:14.681052  240774 round_trippers.go:580]     Audit-Id: c49f716d-9db7-4603-bd45-fbe80c6ad749
	I0108 21:28:14.681061  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:14.681070  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:14.681083  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:14.681091  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:14.681103  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:14 GMT
	I0108 21:28:14.681274  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:15.178745  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:15.178776  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:15.178784  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:15.178791  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:15.181124  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:15.181143  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:15.181150  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:15.181155  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:15.181162  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:15.181170  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:15 GMT
	I0108 21:28:15.181181  240774 round_trippers.go:580]     Audit-Id: c21693bb-a218-4455-8569-99e43b4b79f2
	I0108 21:28:15.181191  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:15.181347  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:15.678688  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:15.678710  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:15.678719  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:15.678725  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:15.680954  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:15.680971  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:15.680978  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:15.680984  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:15 GMT
	I0108 21:28:15.680989  240774 round_trippers.go:580]     Audit-Id: 96f82015-5a8e-4c78-9600-375e85a83478
	I0108 21:28:15.680994  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:15.681000  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:15.681008  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:15.681151  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:15.681622  240774 node_ready.go:58] node "multinode-379549" has status "Ready":"False"
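
Each GET/Response block in this trace is printed by client-go's debug round tripper (round_trippers.go), which wraps the HTTP transport and logs the verb, URL, request headers, response status with latency, and response headers at high log verbosity. A hedged sketch of that wrapping pattern follows; the format strings are illustrative, not client-go's exact output.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// loggingRoundTripper wraps a delegate transport and logs each exchange,
// roughly in the shape of the round_trippers.go lines above.
type loggingRoundTripper struct {
	delegate http.RoundTripper
}

func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	fmt.Printf("%s %s\n", req.Method, req.URL)
	fmt.Println("Request Headers:")
	for k, v := range req.Header {
		fmt.Printf("    %s: %v\n", k, v)
	}
	start := time.Now()
	resp, err := l.delegate.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	fmt.Printf("Response Status: %s in %d milliseconds\n",
		resp.Status, time.Since(start).Milliseconds())
	fmt.Println("Response Headers:")
	for k, v := range resp.Header {
		fmt.Printf("    %s: %v\n", k, v)
	}
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingRoundTripper{delegate: http.DefaultTransport}}
	if resp, err := client.Get("https://example.com/"); err == nil {
		resp.Body.Close()
	}
}

Wrapping http.RoundTripper keeps the logging orthogonal to the request logic, which is why every poll iteration above produces the same uniform block of lines.
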
	I0108 21:28:16.179094  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:16.179114  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:16.179122  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:16.179129  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:16.181488  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:16.181514  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:16.181523  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:16.181532  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:16.181541  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:16.181549  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:16 GMT
	I0108 21:28:16.181557  240774 round_trippers.go:580]     Audit-Id: ed1b5f07-7e1b-4c04-a852-7bbf64f0c382
	I0108 21:28:16.181573  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:16.181725  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:16.678210  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:16.678236  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:16.678244  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:16.678251  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:16.680664  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:16.680684  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:16.680690  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:16.680696  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:16.680701  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:16 GMT
	I0108 21:28:16.680706  240774 round_trippers.go:580]     Audit-Id: 054fe5cb-0a61-443b-9e62-97cfb38c2870
	I0108 21:28:16.680711  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:16.680716  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:16.680917  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:17.178450  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:17.178473  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:17.178481  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:17.178487  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:17.180858  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:17.180883  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:17.180894  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:17.180902  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:17.180910  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:17 GMT
	I0108 21:28:17.180918  240774 round_trippers.go:580]     Audit-Id: f9ed76bc-b07d-4dad-aad7-eebc5a4eb88c
	I0108 21:28:17.180926  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:17.180942  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:17.181140  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:17.678707  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:17.678730  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:17.678752  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:17.678759  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:17.681031  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:17.681056  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:17.681066  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:17.681074  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:17.681081  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:17.681088  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:17 GMT
	I0108 21:28:17.681096  240774 round_trippers.go:580]     Audit-Id: a4792e06-a14a-4019-8ad7-678d27a98b70
	I0108 21:28:17.681107  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:17.681262  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:17.681813  240774 node_ready.go:58] node "multinode-379549" has status "Ready":"False"
	I0108 21:28:18.178942  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:18.178962  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:18.178971  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:18.178977  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:18.181353  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:18.181378  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:18.181388  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:18 GMT
	I0108 21:28:18.181397  240774 round_trippers.go:580]     Audit-Id: 5362dbaf-ce60-49d1-9a64-97f04655228b
	I0108 21:28:18.181405  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:18.181417  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:18.181435  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:18.181462  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:18.181606  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:18.678948  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:18.678972  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:18.678980  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:18.678987  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:18.681297  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:18.681322  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:18.681333  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:18 GMT
	I0108 21:28:18.681342  240774 round_trippers.go:580]     Audit-Id: 739cbc75-7034-45cc-bc24-08887b56615a
	I0108 21:28:18.681353  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:18.681362  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:18.681370  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:18.681378  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:18.681527  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:19.178073  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:19.178097  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:19.178105  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:19.178111  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:19.180464  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:19.180483  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:19.180489  240774 round_trippers.go:580]     Audit-Id: c2f27209-4737-4587-b42f-04686349d66a
	I0108 21:28:19.180495  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:19.180500  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:19.180508  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:19.180516  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:19.180525  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:19 GMT
	I0108 21:28:19.180691  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:19.678245  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:19.678270  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:19.678282  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:19.678290  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:19.680541  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:19.680561  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:19.680573  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:19.680581  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:19.680589  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:19.680597  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:19.680612  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:19 GMT
	I0108 21:28:19.680621  240774 round_trippers.go:580]     Audit-Id: 9efe4e0b-3b63-4a5c-a25e-71462bb3fd19
	I0108 21:28:19.680723  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:20.178350  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:20.178375  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:20.178383  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:20.178389  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:20.180688  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:20.180715  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:20.180726  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:20.180735  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:20.180743  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:20 GMT
	I0108 21:28:20.180751  240774 round_trippers.go:580]     Audit-Id: 7eba7be2-a9d3-4764-be02-fa39d88233fa
	I0108 21:28:20.180763  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:20.180772  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:20.180938  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:20.181269  240774 node_ready.go:58] node "multinode-379549" has status "Ready":"False"
	I0108 21:28:20.678483  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:20.678507  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:20.678518  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:20.678528  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:20.680694  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:20.680722  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:20.680733  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:20.680742  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:20.680748  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:20.680755  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:20 GMT
	I0108 21:28:20.680760  240774 round_trippers.go:580]     Audit-Id: 87db2a55-30fd-4ab4-9250-c80f016bda93
	I0108 21:28:20.680765  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:20.680946  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:21.178975  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:21.179001  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:21.179014  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:21.179024  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:21.181622  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:21.181642  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:21.181649  240774 round_trippers.go:580]     Audit-Id: 1f9d316e-41cf-4bf4-8eb6-1bb8631cf62d
	I0108 21:28:21.181654  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:21.181659  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:21.181665  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:21.181670  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:21.181675  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:21 GMT
	I0108 21:28:21.181882  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:21.678477  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:21.678501  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:21.678511  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:21.678518  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:21.680981  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:21.681001  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:21.681008  240774 round_trippers.go:580]     Audit-Id: 05bb6fbc-07d3-4caa-ac23-640efefe7638
	I0108 21:28:21.681014  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:21.681029  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:21.681037  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:21.681048  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:21.681056  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:21 GMT
	I0108 21:28:21.681193  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:22.178699  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:22.178724  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:22.178733  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:22.178739  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:22.180957  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:22.180982  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:22.181004  240774 round_trippers.go:580]     Audit-Id: 0ce0a785-afd6-4b26-a9ee-c58e33b146a9
	I0108 21:28:22.181013  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:22.181022  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:22.181032  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:22.181043  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:22.181056  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:22 GMT
	I0108 21:28:22.181197  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:22.181641  240774 node_ready.go:58] node "multinode-379549" has status "Ready":"False"
	I0108 21:28:22.678671  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:22.678691  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:22.678700  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:22.678706  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:22.680948  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:22.680972  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:22.680982  240774 round_trippers.go:580]     Audit-Id: 723f6842-c718-4646-95f6-c48462c8fe62
	I0108 21:28:22.680990  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:22.680997  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:22.681007  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:22.681020  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:22.681028  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:22 GMT
	I0108 21:28:22.681226  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:23.178715  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:23.178748  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:23.178758  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:23.178764  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:23.181345  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:23.181363  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:23.181370  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:23.181375  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:23.181386  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:23.181393  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:23 GMT
	I0108 21:28:23.181404  240774 round_trippers.go:580]     Audit-Id: 0ca3eebe-df3d-4b21-9cf9-29b107cbd098
	I0108 21:28:23.181416  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:23.181608  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:23.678181  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:23.678206  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:23.678214  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:23.678221  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:23.680480  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:23.680499  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:23.680506  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:23.680512  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:23.680517  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:23 GMT
	I0108 21:28:23.680524  240774 round_trippers.go:580]     Audit-Id: 7c77d6e6-058c-4adb-ab72-8e4d64eb436c
	I0108 21:28:23.680532  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:23.680539  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:23.680682  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:24.178338  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:24.178367  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:24.178376  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:24.178384  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:24.180807  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:24.180829  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:24.180836  240774 round_trippers.go:580]     Audit-Id: 5feb91ef-2e25-4bdf-992e-88cd8ce2cef8
	I0108 21:28:24.180841  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:24.180846  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:24.180854  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:24.180862  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:24.180875  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:24 GMT
	I0108 21:28:24.181009  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:24.679123  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:24.679147  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:24.679155  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:24.679162  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:24.681619  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:24.681646  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:24.681656  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:24 GMT
	I0108 21:28:24.681665  240774 round_trippers.go:580]     Audit-Id: 20503e89-49e3-4c2c-a4af-8db27954c957
	I0108 21:28:24.681672  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:24.681685  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:24.681694  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:24.681706  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:24.681863  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:24.682206  240774 node_ready.go:58] node "multinode-379549" has status "Ready":"False"
	I0108 21:28:25.178409  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:25.178432  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:25.178459  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:25.178470  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:25.181600  240774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:28:25.181624  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:25.181637  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:25.181645  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:25.181653  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:25 GMT
	I0108 21:28:25.181661  240774 round_trippers.go:580]     Audit-Id: 72354f75-875e-44c2-9e4e-e2bc48e6539d
	I0108 21:28:25.181672  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:25.181679  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:25.181913  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:25.678430  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:25.678460  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:25.678473  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:25.678485  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:25.680772  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:25.680792  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:25.680799  240774 round_trippers.go:580]     Audit-Id: 7bf3cdde-c755-4612-b28b-c08f200d1212
	I0108 21:28:25.680804  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:25.680809  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:25.680814  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:25.680819  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:25.680824  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:25 GMT
	I0108 21:28:25.680937  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:26.178888  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:26.178917  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:26.178929  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:26.178940  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:26.181289  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:26.181309  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:26.181316  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:26.181321  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:26 GMT
	I0108 21:28:26.181326  240774 round_trippers.go:580]     Audit-Id: cdf6709d-1e50-4ab6-af92-0da9a046cdec
	I0108 21:28:26.181340  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:26.181345  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:26.181351  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:26.181704  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:26.678399  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:26.678422  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:26.678430  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:26.678436  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:26.680722  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:26.680742  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:26.680749  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:26.680758  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:26.680766  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:26.680774  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:26 GMT
	I0108 21:28:26.680782  240774 round_trippers.go:580]     Audit-Id: 880ff424-de68-4e96-be06-64bac67f6900
	I0108 21:28:26.680789  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:26.680931  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:27.178494  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:27.178520  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:27.178528  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:27.178536  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:27.180896  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:27.180918  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:27.180927  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:27.180935  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:27 GMT
	I0108 21:28:27.180945  240774 round_trippers.go:580]     Audit-Id: 2dcbf7e9-da5d-468b-a68e-0b1c9021ff64
	I0108 21:28:27.180953  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:27.180960  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:27.180973  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:27.181124  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:27.181482  240774 node_ready.go:58] node "multinode-379549" has status "Ready":"False"
	I0108 21:28:27.678737  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:27.678757  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:27.678765  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:27.678771  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:27.680886  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:27.680905  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:27.680912  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:27.680918  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:27.680923  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:27.680928  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:27 GMT
	I0108 21:28:27.680933  240774 round_trippers.go:580]     Audit-Id: 1df93dc9-92d2-457b-aa59-a30933dc05ca
	I0108 21:28:27.680939  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:27.681145  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:28.178675  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:28.178698  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:28.178706  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:28.178713  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:28.181011  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:28.181037  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:28.181046  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:28.181055  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:28.181064  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:28 GMT
	I0108 21:28:28.181072  240774 round_trippers.go:580]     Audit-Id: c25f35b7-5a90-4716-bde9-dcc9f325c77a
	I0108 21:28:28.181080  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:28.181089  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:28.181280  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:28.678661  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:28.678686  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:28.678716  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:28.678723  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:28.680998  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:28.681019  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:28.681026  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:28 GMT
	I0108 21:28:28.681031  240774 round_trippers.go:580]     Audit-Id: b24f529a-4e48-4a06-b8b4-d944688d881a
	I0108 21:28:28.681036  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:28.681041  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:28.681046  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:28.681052  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:28.681213  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:29.178848  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:29.178874  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:29.178885  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:29.178894  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:29.181293  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:29.181315  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:29.181324  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:29.181329  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:29.181335  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:29.181342  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:29 GMT
	I0108 21:28:29.181349  240774 round_trippers.go:580]     Audit-Id: 46adca88-48f0-4f41-a5e5-ac7ddfb3f1b3
	I0108 21:28:29.181357  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:29.181551  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:29.181967  240774 node_ready.go:58] node "multinode-379549" has status "Ready":"False"
	I0108 21:28:29.678133  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:29.678155  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:29.678163  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:29.678169  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:29.680309  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:29.680331  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:29.680340  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:29.680348  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:29.680355  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:29.680363  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:29 GMT
	I0108 21:28:29.680371  240774 round_trippers.go:580]     Audit-Id: a5c36cc1-3e4a-4fea-967d-306c8c1d68e7
	I0108 21:28:29.680391  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:29.680512  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:30.178132  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:30.178156  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:30.178165  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:30.178171  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:30.180485  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:30.180512  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:30.180522  240774 round_trippers.go:580]     Audit-Id: 4539bee9-1d86-4e03-b971-9d85e94e2612
	I0108 21:28:30.180528  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:30.180533  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:30.180538  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:30.180546  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:30.180554  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:30 GMT
	I0108 21:28:30.180722  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:30.678276  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:30.678299  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:30.678307  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:30.678314  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:30.680379  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:30.680401  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:30.680411  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:30.680418  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:30.680425  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:30.680433  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:30 GMT
	I0108 21:28:30.680441  240774 round_trippers.go:580]     Audit-Id: fa0c7f28-116a-4e29-96ca-fa311c4cca17
	I0108 21:28:30.680454  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:30.680597  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:31.178696  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:31.178735  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:31.178747  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:31.178754  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:31.181152  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:31.181177  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:31.181184  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:31.181190  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:31.181195  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:31.181202  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:31.181207  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:31 GMT
	I0108 21:28:31.181213  240774 round_trippers.go:580]     Audit-Id: ebd1067e-6f48-44da-957a-f6db38d2c3ec
	I0108 21:28:31.181375  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:31.678689  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:31.678712  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:31.678720  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:31.678727  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:31.680941  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:31.680966  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:31.680976  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:31.680985  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:31.680993  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:31.681001  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:31 GMT
	I0108 21:28:31.681014  240774 round_trippers.go:580]     Audit-Id: e105d8d4-ea86-46f4-acd3-acb691fbc5f2
	I0108 21:28:31.681021  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:31.681131  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:31.681470  240774 node_ready.go:58] node "multinode-379549" has status "Ready":"False"
	I0108 21:28:32.178678  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:32.178697  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:32.178705  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:32.178711  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:32.180962  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:32.180980  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:32.180987  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:32 GMT
	I0108 21:28:32.180995  240774 round_trippers.go:580]     Audit-Id: 1cb2af90-8cbc-4995-9d06-a21cb85fa4e5
	I0108 21:28:32.181005  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:32.181013  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:32.181022  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:32.181033  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:32.181185  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:32.678690  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:32.678712  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:32.678722  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:32.678728  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:32.681017  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:32.681041  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:32.681051  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:32.681058  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:32.681067  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:32.681076  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:32.681081  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:32 GMT
	I0108 21:28:32.681090  240774 round_trippers.go:580]     Audit-Id: dcc80b90-7859-45b3-b4cc-5e301ecaf632
	I0108 21:28:32.681237  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:33.178786  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:33.178814  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:33.178827  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:33.178833  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:33.181276  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:33.181302  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:33.181318  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:33.181327  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:33.181338  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:33.181347  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:33 GMT
	I0108 21:28:33.181359  240774 round_trippers.go:580]     Audit-Id: da1974a7-0f26-4de8-a163-e0c9d49eed06
	I0108 21:28:33.181368  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:33.181546  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:33.678075  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:33.678102  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:33.678111  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:33.678117  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:33.680337  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:33.680354  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:33.680362  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:33.680367  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:33.680373  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:33.680378  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:33.680383  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:33 GMT
	I0108 21:28:33.680389  240774 round_trippers.go:580]     Audit-Id: e4961ecb-6a04-459f-87b0-b40b927e1580
	I0108 21:28:33.680678  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:34.178283  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:34.178308  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:34.178317  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:34.178323  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:34.180727  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:34.180746  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:34.180753  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:34.180760  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:34.180765  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:34 GMT
	I0108 21:28:34.180771  240774 round_trippers.go:580]     Audit-Id: 80352311-66a6-437d-86a5-7be6e6f4cf54
	I0108 21:28:34.180777  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:34.180782  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:34.180912  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:34.181264  240774 node_ready.go:58] node "multinode-379549" has status "Ready":"False"
	I0108 21:28:34.678122  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:34.678144  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:34.678152  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:34.678159  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:34.680350  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:34.680368  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:34.680377  240774 round_trippers.go:580]     Audit-Id: 2c4c8b58-84b1-4d17-9019-41a7598a085f
	I0108 21:28:34.680385  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:34.680393  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:34.680401  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:34.680408  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:34.680416  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:34 GMT
	I0108 21:28:34.680563  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:35.178089  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:35.178115  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:35.178124  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:35.178130  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:35.180272  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:35.180297  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:35.180308  240774 round_trippers.go:580]     Audit-Id: 869b40a6-4da8-4b4a-a161-cdb6b647d5f2
	I0108 21:28:35.180317  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:35.180326  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:35.180335  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:35.180343  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:35.180352  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:35 GMT
	I0108 21:28:35.180515  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:35.678707  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:35.678738  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:35.678747  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:35.678753  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:35.680791  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:35.680810  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:35.680817  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:35 GMT
	I0108 21:28:35.680823  240774 round_trippers.go:580]     Audit-Id: d91b499b-f845-46ea-a379-4a971422ada1
	I0108 21:28:35.680828  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:35.680833  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:35.680838  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:35.680844  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:35.680982  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:36.178829  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:36.178873  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:36.178881  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:36.178887  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:36.181478  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:36.181501  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:36.181509  240774 round_trippers.go:580]     Audit-Id: 9b346ae6-2940-4312-8b84-71cc682b3440
	I0108 21:28:36.181514  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:36.181521  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:36.181530  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:36.181537  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:36.181547  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:36 GMT
	I0108 21:28:36.181720  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:36.182034  240774 node_ready.go:58] node "multinode-379549" has status "Ready":"False"
	I0108 21:28:36.678277  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:36.678297  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:36.678305  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:36.678311  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:36.680480  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:36.680499  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:36.680509  240774 round_trippers.go:580]     Audit-Id: 7b5a2c8f-59ed-4c80-8d78-ee8cb7cb7ed1
	I0108 21:28:36.680518  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:36.680528  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:36.680537  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:36.680551  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:36.680559  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:36 GMT
	I0108 21:28:36.680708  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:37.178270  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:37.178296  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:37.178305  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:37.178311  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:37.180553  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:37.180574  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:37.180581  240774 round_trippers.go:580]     Audit-Id: 0a12fda6-52d8-4c21-874e-5f178d530d76
	I0108 21:28:37.180592  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:37.180602  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:37.180609  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:37.180617  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:37.180626  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:37 GMT
	I0108 21:28:37.180820  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"338","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 21:28:37.678299  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:37.678322  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:37.678330  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:37.678338  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:37.680431  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:37.680460  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:37.680471  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:37 GMT
	I0108 21:28:37.680479  240774 round_trippers.go:580]     Audit-Id: 8262c331-7852-4b27-8118-10f8e7d3f149
	I0108 21:28:37.680487  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:37.680495  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:37.680506  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:37.680514  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:37.680659  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"421","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 6019 chars]
	I0108 21:28:37.681075  240774 node_ready.go:49] node "multinode-379549" has status "Ready":"True"
	I0108 21:28:37.681098  240774 node_ready.go:38] duration metric: took 30.50318403s waiting for node "multinode-379549" to be "Ready" ...
	I0108 21:28:37.681110  240774 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
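
	[editor's note] The node_ready/pod_ready lines above show minikube polling the API server roughly twice a second until the node's Ready condition turns True (here, 30.5s), then repeating the same loop for each system-critical pod. A minimal client-go sketch of that poll-until-Ready pattern follows; it is not minikube's actual implementation, and the kubeconfig path, 500ms interval, and 6-minute timeout are assumptions read off the log cadence.

	// readiness_sketch.go — poll a node's Ready condition, in the style of the
	// node_ready loop logged above. All concrete values are illustrative.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path; minikube manages its own per-profile config.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Poll every 500ms (matching the ~2 GETs/second in the log) until the
		// node reports Ready, or give up after 6 minutes.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := client.CoreV1().Nodes().Get(ctx, "multinode-379549", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat errors as transient and keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("node ready:", err == nil)
	}

	This evaluates the same NodeReady condition that `kubectl wait --for=condition=Ready node/multinode-379549` checks, with the loop supplied by the client instead of a watch.
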
	I0108 21:28:37.681191  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 21:28:37.681202  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:37.681213  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:37.681230  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:37.684645  240774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:28:37.684670  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:37.684679  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:37.684685  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:37.684690  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:37 GMT
	I0108 21:28:37.684701  240774 round_trippers.go:580]     Audit-Id: a30f0165-e718-45b4-917b-5917b98b5cdb
	I0108 21:28:37.684709  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:37.684717  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:37.685247  240774 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"421"},"items":[{"metadata":{"name":"coredns-5dd5756b68-72pdc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e1a23fde-a3c8-4acb-b244-41f8ddfe2645","resourceVersion":"381","creationTimestamp":"2024-01-08T21:28:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2ccf13cb-17a5-42f5-93cd-8a7a2f07e11e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:28:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2ccf13cb-17a5-42f5-93cd-8a7a2f07e11e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52943 chars]
	I0108 21:28:37.688300  240774 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-72pdc" in "kube-system" namespace to be "Ready" ...
	I0108 21:28:37.688414  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-72pdc
	I0108 21:28:37.688428  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:37.688438  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:37.688446  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:37.690415  240774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:28:37.690434  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:37.690443  240774 round_trippers.go:580]     Audit-Id: 9e7088b9-5b45-4fc7-9e03-2d6582f29ca8
	I0108 21:28:37.690451  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:37.690459  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:37.690467  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:37.690476  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:37.690489  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:37 GMT
	I0108 21:28:37.690602  240774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-72pdc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e1a23fde-a3c8-4acb-b244-41f8ddfe2645","resourceVersion":"381","creationTimestamp":"2024-01-08T21:28:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2ccf13cb-17a5-42f5-93cd-8a7a2f07e11e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:28:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2ccf13cb-17a5-42f5-93cd-8a7a2f07e11e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 4943 chars]
	I0108 21:28:38.189255  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-72pdc
	I0108 21:28:38.189285  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:38.189299  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:38.189308  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:38.191810  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:38.191844  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:38.191855  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:38.191864  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:38.191873  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:38.191891  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:38 GMT
	I0108 21:28:38.191899  240774 round_trippers.go:580]     Audit-Id: 3d2a5f78-fad7-4e15-ade5-5da11135679f
	I0108 21:28:38.191906  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:38.192027  240774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-72pdc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e1a23fde-a3c8-4acb-b244-41f8ddfe2645","resourceVersion":"428","creationTimestamp":"2024-01-08T21:28:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2ccf13cb-17a5-42f5-93cd-8a7a2f07e11e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:28:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2ccf13cb-17a5-42f5-93cd-8a7a2f07e11e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0108 21:28:38.192466  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:38.192479  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:38.192486  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:38.192492  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:38.194598  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:38.194620  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:38.194630  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:38.194639  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:38 GMT
	I0108 21:28:38.194651  240774 round_trippers.go:580]     Audit-Id: cf78d26c-7f85-4c1b-8d68-9d043eceb482
	I0108 21:28:38.194671  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:38.194680  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:38.194688  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:38.194804  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"422","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 21:28:38.689501  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-72pdc
	I0108 21:28:38.689528  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:38.689541  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:38.689551  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:38.691789  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:38.691821  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:38.691828  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:38 GMT
	I0108 21:28:38.691833  240774 round_trippers.go:580]     Audit-Id: 86a3b2b5-0f3a-4880-810f-a330197fb9d6
	I0108 21:28:38.691838  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:38.691846  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:38.691854  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:38.691861  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:38.691995  240774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-72pdc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e1a23fde-a3c8-4acb-b244-41f8ddfe2645","resourceVersion":"428","creationTimestamp":"2024-01-08T21:28:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2ccf13cb-17a5-42f5-93cd-8a7a2f07e11e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:28:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2ccf13cb-17a5-42f5-93cd-8a7a2f07e11e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0108 21:28:38.692473  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:38.692491  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:38.692499  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:38.692507  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:38.694351  240774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:28:38.694366  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:38.694372  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:38.694378  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:38.694384  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:38.694390  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:38 GMT
	I0108 21:28:38.694397  240774 round_trippers.go:580]     Audit-Id: 194f191c-333b-40ab-b025-80d31114ada2
	I0108 21:28:38.694402  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:38.694516  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"422","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 21:28:39.189213  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-72pdc
	I0108 21:28:39.189240  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:39.189251  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:39.189260  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:39.191691  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:39.191711  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:39.191718  240774 round_trippers.go:580]     Audit-Id: ccd1e775-074c-41d0-bff5-10526f0534de
	I0108 21:28:39.191724  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:39.191729  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:39.191734  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:39.191741  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:39.191746  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:39 GMT
	I0108 21:28:39.191964  240774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-72pdc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e1a23fde-a3c8-4acb-b244-41f8ddfe2645","resourceVersion":"441","creationTimestamp":"2024-01-08T21:28:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2ccf13cb-17a5-42f5-93cd-8a7a2f07e11e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:28:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2ccf13cb-17a5-42f5-93cd-8a7a2f07e11e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0108 21:28:39.192454  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:39.192469  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:39.192479  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:39.192488  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:39.194310  240774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:28:39.194328  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:39.194335  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:39.194341  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:39.194346  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:39.194351  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:39.194356  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:39 GMT
	I0108 21:28:39.194361  240774 round_trippers.go:580]     Audit-Id: e7617ae5-91d1-4ecd-be94-74d7bb607f96
	I0108 21:28:39.194509  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"422","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 21:28:39.194806  240774 pod_ready.go:92] pod "coredns-5dd5756b68-72pdc" in "kube-system" namespace has status "Ready":"True"
	I0108 21:28:39.194821  240774 pod_ready.go:81] duration metric: took 1.506497989s waiting for pod "coredns-5dd5756b68-72pdc" in "kube-system" namespace to be "Ready" ...
	I0108 21:28:39.194830  240774 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-379549" in "kube-system" namespace to be "Ready" ...
	I0108 21:28:39.194880  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-379549
	I0108 21:28:39.194887  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:39.194894  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:39.194900  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:39.196582  240774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:28:39.196599  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:39.196606  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:39.196611  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:39 GMT
	I0108 21:28:39.196616  240774 round_trippers.go:580]     Audit-Id: e1a49735-198b-4587-8f7e-9fd72be1074d
	I0108 21:28:39.196623  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:39.196631  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:39.196641  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:39.196807  240774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-379549","namespace":"kube-system","uid":"15613f97-4ce1-40e7-9477-83067c6da0d5","resourceVersion":"331","creationTimestamp":"2024-01-08T21:27:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"63b952661100faa87b3a92441ecb5e45","kubernetes.io/config.mirror":"63b952661100faa87b3a92441ecb5e45","kubernetes.io/config.seen":"2024-01-08T21:27:52.971924843Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0108 21:28:39.197192  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:39.197206  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:39.197213  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:39.197222  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:39.199138  240774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:28:39.199153  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:39.199159  240774 round_trippers.go:580]     Audit-Id: 493c6f60-e596-4b73-920d-c740f0d15ebd
	I0108 21:28:39.199165  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:39.199171  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:39.199177  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:39.199186  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:39.199196  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:39 GMT
	I0108 21:28:39.199361  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"422","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 21:28:39.199659  240774 pod_ready.go:92] pod "etcd-multinode-379549" in "kube-system" namespace has status "Ready":"True"
	I0108 21:28:39.199675  240774 pod_ready.go:81] duration metric: took 4.839419ms waiting for pod "etcd-multinode-379549" in "kube-system" namespace to be "Ready" ...
	I0108 21:28:39.199687  240774 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-379549" in "kube-system" namespace to be "Ready" ...
	I0108 21:28:39.199752  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-379549
	I0108 21:28:39.199760  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:39.199767  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:39.199775  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:39.201428  240774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:28:39.201452  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:39.201462  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:39.201472  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:39.201483  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:39.201494  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:39.201506  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:39 GMT
	I0108 21:28:39.201518  240774 round_trippers.go:580]     Audit-Id: 710bf6f6-a752-4ad3-a2b0-e0407a765843
	I0108 21:28:39.201675  240774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-379549","namespace":"kube-system","uid":"904d4735-a5db-4779-a543-37219944e6ad","resourceVersion":"298","creationTimestamp":"2024-01-08T21:27:53Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"69032da1baa3b087de5b0ec3fd7fdd38","kubernetes.io/config.mirror":"69032da1baa3b087de5b0ec3fd7fdd38","kubernetes.io/config.seen":"2024-01-08T21:27:52.971927859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0108 21:28:39.202156  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:39.202174  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:39.202185  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:39.202194  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:39.203676  240774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:28:39.203696  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:39.203706  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:39.203715  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:39.203724  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:39.203740  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:39 GMT
	I0108 21:28:39.203748  240774 round_trippers.go:580]     Audit-Id: bfd2727d-3f95-46f3-b2da-2bc5eafdf6c4
	I0108 21:28:39.203760  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:39.203866  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"422","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 21:28:39.204134  240774 pod_ready.go:92] pod "kube-apiserver-multinode-379549" in "kube-system" namespace has status "Ready":"True"
	I0108 21:28:39.204149  240774 pod_ready.go:81] duration metric: took 4.448864ms waiting for pod "kube-apiserver-multinode-379549" in "kube-system" namespace to be "Ready" ...
	I0108 21:28:39.204158  240774 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-379549" in "kube-system" namespace to be "Ready" ...
	I0108 21:28:39.204205  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-379549
	I0108 21:28:39.204212  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:39.204219  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:39.204225  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:39.205945  240774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:28:39.205960  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:39.205966  240774 round_trippers.go:580]     Audit-Id: c1b8cb80-ca6e-4571-bbb5-b8a6e8026cc9
	I0108 21:28:39.205984  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:39.205992  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:39.206003  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:39.206015  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:39.206024  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:39 GMT
	I0108 21:28:39.206280  240774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-379549","namespace":"kube-system","uid":"c1f82f54-8b68-4da1-bfd1-f70984dc7718","resourceVersion":"295","creationTimestamp":"2024-01-08T21:27:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8fada1f4a8cbf7faf3b4e40829defd17","kubernetes.io/config.mirror":"8fada1f4a8cbf7faf3b4e40829defd17","kubernetes.io/config.seen":"2024-01-08T21:27:52.971928939Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0108 21:28:39.206652  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:39.206667  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:39.206677  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:39.206686  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:39.208259  240774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:28:39.208274  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:39.208283  240774 round_trippers.go:580]     Audit-Id: 970e3c36-80e4-4870-a10e-a12e229632e9
	I0108 21:28:39.208292  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:39.208308  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:39.208316  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:39.208325  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:39.208330  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:39 GMT
	I0108 21:28:39.208418  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"422","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 21:28:39.208681  240774 pod_ready.go:92] pod "kube-controller-manager-multinode-379549" in "kube-system" namespace has status "Ready":"True"
	I0108 21:28:39.208696  240774 pod_ready.go:81] duration metric: took 4.528255ms waiting for pod "kube-controller-manager-multinode-379549" in "kube-system" namespace to be "Ready" ...
	I0108 21:28:39.208705  240774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zqbsv" in "kube-system" namespace to be "Ready" ...
	I0108 21:28:39.208748  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zqbsv
	I0108 21:28:39.208755  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:39.208762  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:39.208768  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:39.210372  240774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:28:39.210391  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:39.210400  240774 round_trippers.go:580]     Audit-Id: daf744c7-5b3a-427b-b0eb-a2542568c353
	I0108 21:28:39.210409  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:39.210417  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:39.210425  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:39.210438  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:39.210446  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:39 GMT
	I0108 21:28:39.210553  240774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zqbsv","generateName":"kube-proxy-","namespace":"kube-system","uid":"44731b94-fdd2-41ae-9b2e-44e8eb5ca2a9","resourceVersion":"398","creationTimestamp":"2024-01-08T21:28:06Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2948c2ab-b26e-4614-b6ae-5a133350e7b7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:28:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2948c2ab-b26e-4614-b6ae-5a133350e7b7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0108 21:28:39.210963  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:39.210978  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:39.210985  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:39.210990  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:39.212462  240774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:28:39.212485  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:39.212495  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:39.212504  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:39.212513  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:39 GMT
	I0108 21:28:39.212522  240774 round_trippers.go:580]     Audit-Id: 9e9e0871-dbbc-4b70-a268-0bb25c8f4ceb
	I0108 21:28:39.212530  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:39.212542  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:39.212675  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"422","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 21:28:39.212971  240774 pod_ready.go:92] pod "kube-proxy-zqbsv" in "kube-system" namespace has status "Ready":"True"
	I0108 21:28:39.212987  240774 pod_ready.go:81] duration metric: took 4.276668ms waiting for pod "kube-proxy-zqbsv" in "kube-system" namespace to be "Ready" ...
	I0108 21:28:39.212995  240774 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-379549" in "kube-system" namespace to be "Ready" ...
	I0108 21:28:39.389315  240774 request.go:629] Waited for 176.258571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-379549
	I0108 21:28:39.389396  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-379549
	I0108 21:28:39.389404  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:39.389413  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:39.389421  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:39.391737  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:39.391756  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:39.391763  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:39 GMT
	I0108 21:28:39.391768  240774 round_trippers.go:580]     Audit-Id: 6c499853-01fc-4d0f-90c4-b6f5b56c603e
	I0108 21:28:39.391774  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:39.391782  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:39.391790  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:39.391799  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:39.391936  240774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-379549","namespace":"kube-system","uid":"8c5d7d7d-f49a-427d-b2c2-72db08b9934f","resourceVersion":"326","creationTimestamp":"2024-01-08T21:27:52Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"43bec5e36c6ff7f4f3ffbf0511cc280c","kubernetes.io/config.mirror":"43bec5e36c6ff7f4f3ffbf0511cc280c","kubernetes.io/config.seen":"2024-01-08T21:27:47.325897436Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0108 21:28:39.589691  240774 request.go:629] Waited for 197.35247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:39.589767  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:39.589774  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:39.589782  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:39.589791  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:39.592020  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:39.592038  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:39.592045  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:39.592053  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:39 GMT
	I0108 21:28:39.592061  240774 round_trippers.go:580]     Audit-Id: b222dfcb-bb1f-482c-b5e9-17ee2a583c1c
	I0108 21:28:39.592071  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:39.592080  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:39.592090  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:39.592190  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"422","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 21:28:39.592511  240774 pod_ready.go:92] pod "kube-scheduler-multinode-379549" in "kube-system" namespace has status "Ready":"True"
	I0108 21:28:39.592529  240774 pod_ready.go:81] duration metric: took 379.528891ms waiting for pod "kube-scheduler-multinode-379549" in "kube-system" namespace to be "Ready" ...
	I0108 21:28:39.592539  240774 pod_ready.go:38] duration metric: took 1.911415208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
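Each "waiting for pod ... to be Ready" phase above polls the pod's Ready condition until it reports True. A simplified sketch of that loop with client-go; the real pod_ready.go additionally tracks the system-critical labels listed in the log and handles pod deletion, which this omits:

```go
package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod until its Ready condition reports True,
// roughly what the pod_ready.go lines above are doing for each
// control-plane pod.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
```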
	I0108 21:28:39.592557  240774 api_server.go:52] waiting for apiserver process to appear ...
	I0108 21:28:39.592606  240774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:28:39.602908  240774 command_runner.go:130] > 1443
	I0108 21:28:39.602943  240774 api_server.go:72] duration metric: took 33.083809434s to wait for apiserver process to appear ...
	I0108 21:28:39.602952  240774 api_server.go:88] waiting for apiserver healthz status ...
	I0108 21:28:39.602970  240774 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0108 21:28:39.608467  240774 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0108 21:28:39.608528  240774 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0108 21:28:39.608535  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:39.608543  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:39.608552  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:39.609400  240774 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0108 21:28:39.609419  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:39.609429  240774 round_trippers.go:580]     Audit-Id: 28876d4c-0c91-4232-8888-8d97afe15784
	I0108 21:28:39.609438  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:39.609462  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:39.609474  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:39.609486  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:39.609498  240774 round_trippers.go:580]     Content-Length: 264
	I0108 21:28:39.609508  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:39 GMT
	I0108 21:28:39.609531  240774 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0108 21:28:39.609615  240774 api_server.go:141] control plane version: v1.28.4
	I0108 21:28:39.609641  240774 api_server.go:131] duration metric: took 6.684806ms to wait for apiserver health ...
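The healthz wait above is an HTTPS GET against the apiserver's /healthz endpoint that succeeds once the body reads "ok". A sketch with net/http; skipping TLS verification is a shortcut for this example only (minikube trusts the cluster CA from the profile directory, and a locked-down cluster may also return 401/403 to unauthenticated healthz probes):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Shortcut for the sketch: skip certificate verification.
		// minikube instead pins the cluster CA certificate.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.58.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // healthy apiserver: 200 ok
}
```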
	I0108 21:28:39.609651  240774 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:28:39.790049  240774 request.go:629] Waited for 180.331175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 21:28:39.790120  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 21:28:39.790129  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:39.790137  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:39.790143  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:39.793218  240774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:28:39.793245  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:39.793253  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:39.793260  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:39 GMT
	I0108 21:28:39.793265  240774 round_trippers.go:580]     Audit-Id: 9d9f1b99-aacd-45f5-9ad1-c5d5104c6c6a
	I0108 21:28:39.793272  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:39.793297  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:39.793310  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:39.793928  240774 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"coredns-5dd5756b68-72pdc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e1a23fde-a3c8-4acb-b244-41f8ddfe2645","resourceVersion":"441","creationTimestamp":"2024-01-08T21:28:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2ccf13cb-17a5-42f5-93cd-8a7a2f07e11e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:28:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2ccf13cb-17a5-42f5-93cd-8a7a2f07e11e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0108 21:28:39.795655  240774 system_pods.go:59] 8 kube-system pods found
	I0108 21:28:39.795680  240774 system_pods.go:61] "coredns-5dd5756b68-72pdc" [e1a23fde-a3c8-4acb-b244-41f8ddfe2645] Running
	I0108 21:28:39.795686  240774 system_pods.go:61] "etcd-multinode-379549" [15613f97-4ce1-40e7-9477-83067c6da0d5] Running
	I0108 21:28:39.795699  240774 system_pods.go:61] "kindnet-982tk" [382d7096-e18b-45fc-a98a-df05c243ffeb] Running
	I0108 21:28:39.795704  240774 system_pods.go:61] "kube-apiserver-multinode-379549" [904d4735-a5db-4779-a543-37219944e6ad] Running
	I0108 21:28:39.795715  240774 system_pods.go:61] "kube-controller-manager-multinode-379549" [c1f82f54-8b68-4da1-bfd1-f70984dc7718] Running
	I0108 21:28:39.795724  240774 system_pods.go:61] "kube-proxy-zqbsv" [44731b94-fdd2-41ae-9b2e-44e8eb5ca2a9] Running
	I0108 21:28:39.795729  240774 system_pods.go:61] "kube-scheduler-multinode-379549" [8c5d7d7d-f49a-427d-b2c2-72db08b9934f] Running
	I0108 21:28:39.795734  240774 system_pods.go:61] "storage-provisioner" [c2b077b4-019f-4d60-950e-5f924b4cacb4] Running
	I0108 21:28:39.795740  240774 system_pods.go:74] duration metric: took 186.081133ms to wait for pod list to return data ...
	I0108 21:28:39.795749  240774 default_sa.go:34] waiting for default service account to be created ...
	I0108 21:28:39.990212  240774 request.go:629] Waited for 194.346472ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0108 21:28:39.990270  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0108 21:28:39.990275  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:39.990283  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:39.990291  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:39.992620  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:39.992641  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:39.992649  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:39.992655  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:39.992660  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:39.992666  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:39.992672  240774 round_trippers.go:580]     Content-Length: 261
	I0108 21:28:39.992677  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:39 GMT
	I0108 21:28:39.992682  240774 round_trippers.go:580]     Audit-Id: a7007f15-2cb5-4ab9-af1b-9f0c1b6a2e31
	I0108 21:28:39.992710  240774 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a81d71ec-1c54-459b-8f79-a82d1be0f25e","resourceVersion":"340","creationTimestamp":"2024-01-08T21:28:05Z"}}]}
	I0108 21:28:39.992906  240774 default_sa.go:45] found service account: "default"
	I0108 21:28:39.992923  240774 default_sa.go:55] duration metric: took 197.167846ms for default service account to be created ...
	I0108 21:28:39.992932  240774 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 21:28:40.189312  240774 request.go:629] Waited for 196.30572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 21:28:40.189396  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 21:28:40.189408  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:40.189420  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:40.189430  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:40.192490  240774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:28:40.192516  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:40.192525  240774 round_trippers.go:580]     Audit-Id: a2d5c381-df69-4989-9371-bb65b3d9e7e2
	I0108 21:28:40.192532  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:40.192539  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:40.192546  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:40.192553  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:40.192565  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:40 GMT
	I0108 21:28:40.192952  240774 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"coredns-5dd5756b68-72pdc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e1a23fde-a3c8-4acb-b244-41f8ddfe2645","resourceVersion":"441","creationTimestamp":"2024-01-08T21:28:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2ccf13cb-17a5-42f5-93cd-8a7a2f07e11e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:28:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2ccf13cb-17a5-42f5-93cd-8a7a2f07e11e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0108 21:28:40.194761  240774 system_pods.go:86] 8 kube-system pods found
	I0108 21:28:40.194784  240774 system_pods.go:89] "coredns-5dd5756b68-72pdc" [e1a23fde-a3c8-4acb-b244-41f8ddfe2645] Running
	I0108 21:28:40.194790  240774 system_pods.go:89] "etcd-multinode-379549" [15613f97-4ce1-40e7-9477-83067c6da0d5] Running
	I0108 21:28:40.194796  240774 system_pods.go:89] "kindnet-982tk" [382d7096-e18b-45fc-a98a-df05c243ffeb] Running
	I0108 21:28:40.194801  240774 system_pods.go:89] "kube-apiserver-multinode-379549" [904d4735-a5db-4779-a543-37219944e6ad] Running
	I0108 21:28:40.194809  240774 system_pods.go:89] "kube-controller-manager-multinode-379549" [c1f82f54-8b68-4da1-bfd1-f70984dc7718] Running
	I0108 21:28:40.194815  240774 system_pods.go:89] "kube-proxy-zqbsv" [44731b94-fdd2-41ae-9b2e-44e8eb5ca2a9] Running
	I0108 21:28:40.194822  240774 system_pods.go:89] "kube-scheduler-multinode-379549" [8c5d7d7d-f49a-427d-b2c2-72db08b9934f] Running
	I0108 21:28:40.194826  240774 system_pods.go:89] "storage-provisioner" [c2b077b4-019f-4d60-950e-5f924b4cacb4] Running
	I0108 21:28:40.194831  240774 system_pods.go:126] duration metric: took 201.892575ms to wait for k8s-apps to be running ...
	I0108 21:28:40.194840  240774 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 21:28:40.194883  240774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:28:40.205545  240774 system_svc.go:56] duration metric: took 10.69684ms WaitForService to wait for kubelet.
	I0108 21:28:40.205570  240774 kubeadm.go:581] duration metric: took 33.686436246s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 21:28:40.205587  240774 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:28:40.390016  240774 request.go:629] Waited for 184.331729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0108 21:28:40.390092  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0108 21:28:40.390100  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:40.390108  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:40.390117  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:40.392477  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:40.392499  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:40.392510  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:40.392520  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:40 GMT
	I0108 21:28:40.392528  240774 round_trippers.go:580]     Audit-Id: 223b017d-6d22-4746-a7f7-610ba9dffe08
	I0108 21:28:40.392540  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:40.392552  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:40.392564  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:40.392684  240774 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"422","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I0108 21:28:40.393110  240774 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 21:28:40.393134  240774 node_conditions.go:123] node cpu capacity is 8
	I0108 21:28:40.393144  240774 node_conditions.go:105] duration metric: took 187.552481ms to run NodePressure ...
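The NodePressure check above reads ephemeral-storage and CPU capacity straight off the NodeList. A self-contained sketch of the same read with client-go; the kubeconfig path comes from clientcmd's default, everything else from the log:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// These are the two capacities node_conditions.go reports above.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
```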
	I0108 21:28:40.393159  240774 start.go:228] waiting for startup goroutines ...
	I0108 21:28:40.393165  240774 start.go:233] waiting for cluster config update ...
	I0108 21:28:40.393177  240774 start.go:242] writing updated cluster config ...
	I0108 21:28:40.395745  240774 out.go:177] 
	I0108 21:28:40.397256  240774 config.go:182] Loaded profile config "multinode-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:28:40.397325  240774 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/config.json ...
	I0108 21:28:40.399297  240774 out.go:177] * Starting worker node multinode-379549-m02 in cluster multinode-379549
	I0108 21:28:40.401122  240774 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 21:28:40.402419  240774 out.go:177] * Pulling base image v0.0.42-1703790982-17866 ...
	I0108 21:28:40.403807  240774 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:28:40.403840  240774 cache.go:56] Caching tarball of preloaded images
	I0108 21:28:40.403902  240774 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local docker daemon
	I0108 21:28:40.403950  240774 preload.go:174] Found /home/jenkins/minikube-integration/17866-150013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 21:28:40.403959  240774 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 21:28:40.404040  240774 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/config.json ...
	I0108 21:28:40.420074  240774 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local docker daemon, skipping pull
	I0108 21:28:40.420098  240774 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d exists in daemon, skipping load
	I0108 21:28:40.420121  240774 cache.go:194] Successfully downloaded all kic artifacts
	I0108 21:28:40.420161  240774 start.go:365] acquiring machines lock for multinode-379549-m02: {Name:mkaec927bfd28bf2f3494299b4e8b593e07e1a02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:28:40.420278  240774 start.go:369] acquired machines lock for "multinode-379549-m02" in 94.995µs
	I0108 21:28:40.420308  240774 start.go:93] Provisioning new machine with config: &{Name:multinode-379549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-379549 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 21:28:40.420402  240774 start.go:125] createHost starting for "m02" (driver="docker")
	I0108 21:28:40.422462  240774 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0108 21:28:40.422595  240774 start.go:159] libmachine.API.Create for "multinode-379549" (driver="docker")
	I0108 21:28:40.422620  240774 client.go:168] LocalClient.Create starting
	I0108 21:28:40.422696  240774 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem
	I0108 21:28:40.422748  240774 main.go:141] libmachine: Decoding PEM data...
	I0108 21:28:40.422772  240774 main.go:141] libmachine: Parsing certificate...
	I0108 21:28:40.422853  240774 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-150013/.minikube/certs/cert.pem
	I0108 21:28:40.422886  240774 main.go:141] libmachine: Decoding PEM data...
	I0108 21:28:40.422904  240774 main.go:141] libmachine: Parsing certificate...
	I0108 21:28:40.423128  240774 cli_runner.go:164] Run: docker network inspect multinode-379549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:28:40.438511  240774 network_create.go:77] Found existing network {name:multinode-379549 subnet:0xc002ed7470 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0108 21:28:40.438550  240774 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-379549-m02" container
	I0108 21:28:40.438613  240774 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 21:28:40.453249  240774 cli_runner.go:164] Run: docker volume create multinode-379549-m02 --label name.minikube.sigs.k8s.io=multinode-379549-m02 --label created_by.minikube.sigs.k8s.io=true
	I0108 21:28:40.469061  240774 oci.go:103] Successfully created a docker volume multinode-379549-m02
	I0108 21:28:40.469143  240774 cli_runner.go:164] Run: docker run --rm --name multinode-379549-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-379549-m02 --entrypoint /usr/bin/test -v multinode-379549-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -d /var/lib
	I0108 21:28:40.981423  240774 oci.go:107] Successfully prepared a docker volume multinode-379549-m02
	I0108 21:28:40.981493  240774 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:28:40.981518  240774 kic.go:194] Starting extracting preloaded images to volume ...
	I0108 21:28:40.981581  240774 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17866-150013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-379549-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 21:28:46.061006  240774 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17866-150013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-379549-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -I lz4 -xf /preloaded.tar -C /extractDir: (5.079373682s)
	I0108 21:28:46.061043  240774 kic.go:203] duration metric: took 5.079522 seconds to extract preloaded images to volume
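The kic driver stages the preloaded image tarball into the node's named volume by running tar inside a throwaway container, which is exactly what the two docker invocations above show. A sketch of the same two steps via os/exec; the tarball path is illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
)

// run shells out and surfaces combined output on failure.
func run(args ...string) error {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	const (
		volume  = "multinode-379549-m02"
		tarball = "/path/to/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4" // illustrative
		image   = "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866"
	)
	// 1. Create the named volume that becomes the node's /var.
	if err := run("docker", "volume", "create", volume); err != nil {
		panic(err)
	}
	// 2. Extract the lz4 tarball into the volume from a throwaway
	//    container, as the preload-sidecar step in the log does.
	if err := run("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir"); err != nil {
		panic(err)
	}
}
```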
	W0108 21:28:46.061159  240774 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 21:28:46.061248  240774 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 21:28:46.112301  240774 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-379549-m02 --name multinode-379549-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-379549-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-379549-m02 --network multinode-379549 --ip 192.168.58.3 --volume multinode-379549-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d
	I0108 21:28:46.409407  240774 cli_runner.go:164] Run: docker container inspect multinode-379549-m02 --format={{.State.Running}}
	I0108 21:28:46.426624  240774 cli_runner.go:164] Run: docker container inspect multinode-379549-m02 --format={{.State.Status}}
	I0108 21:28:46.443735  240774 cli_runner.go:164] Run: docker exec multinode-379549-m02 stat /var/lib/dpkg/alternatives/iptables
	I0108 21:28:46.505952  240774 oci.go:144] the created container "multinode-379549-m02" has a running status.
	I0108 21:28:46.505984  240774 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17866-150013/.minikube/machines/multinode-379549-m02/id_rsa...
	I0108 21:28:46.577405  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/machines/multinode-379549-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0108 21:28:46.577455  240774 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17866-150013/.minikube/machines/multinode-379549-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 21:28:46.600638  240774 cli_runner.go:164] Run: docker container inspect multinode-379549-m02 --format={{.State.Status}}
	I0108 21:28:46.618127  240774 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 21:28:46.618149  240774 kic_runner.go:114] Args: [docker exec --privileged multinode-379549-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 21:28:46.694405  240774 cli_runner.go:164] Run: docker container inspect multinode-379549-m02 --format={{.State.Status}}
	I0108 21:28:46.711908  240774 machine.go:88] provisioning docker machine ...
	I0108 21:28:46.711948  240774 ubuntu.go:169] provisioning hostname "multinode-379549-m02"
	I0108 21:28:46.712018  240774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-379549-m02
	I0108 21:28:46.733809  240774 main.go:141] libmachine: Using SSH client type: native
	I0108 21:28:46.734150  240774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0108 21:28:46.734165  240774 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-379549-m02 && echo "multinode-379549-m02" | sudo tee /etc/hostname
	I0108 21:28:46.734774  240774 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41568->127.0.0.1:32852: read: connection reset by peer
	I0108 21:28:49.887822  240774 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-379549-m02
	
	I0108 21:28:49.887908  240774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-379549-m02
	I0108 21:28:49.904225  240774 main.go:141] libmachine: Using SSH client type: native
	I0108 21:28:49.904596  240774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0108 21:28:49.904621  240774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-379549-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-379549-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-379549-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:28:50.041324  240774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
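Provisioning runs each of these commands over SSH against the container's published port (32852 here). A sketch of one such remote command with golang.org/x/crypto/ssh; the key path is illustrative, and ignoring the host key is acceptable only because the target is a local test container:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/machines/multinode-379549-m02/id_rsa") // illustrative
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test container only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32852", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	// The hostname command from the provisioning step above.
	out, err := session.CombinedOutput(`sudo hostname multinode-379549-m02 && echo "multinode-379549-m02" | sudo tee /etc/hostname`)
	fmt.Printf("%s err=%v\n", out, err)
}
```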
	I0108 21:28:50.041363  240774 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17866-150013/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-150013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-150013/.minikube}
	I0108 21:28:50.041379  240774 ubuntu.go:177] setting up certificates
	I0108 21:28:50.041393  240774 provision.go:83] configureAuth start
	I0108 21:28:50.041461  240774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-379549-m02
	I0108 21:28:50.057533  240774 provision.go:138] copyHostCerts
	I0108 21:28:50.057573  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17866-150013/.minikube/cert.pem
	I0108 21:28:50.057600  240774 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-150013/.minikube/cert.pem, removing ...
	I0108 21:28:50.057609  240774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-150013/.minikube/cert.pem
	I0108 21:28:50.057666  240774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-150013/.minikube/cert.pem (1123 bytes)
	I0108 21:28:50.057745  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17866-150013/.minikube/key.pem
	I0108 21:28:50.057763  240774 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-150013/.minikube/key.pem, removing ...
	I0108 21:28:50.057767  240774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-150013/.minikube/key.pem
	I0108 21:28:50.057789  240774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-150013/.minikube/key.pem (1675 bytes)
	I0108 21:28:50.057849  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17866-150013/.minikube/ca.pem
	I0108 21:28:50.057905  240774 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-150013/.minikube/ca.pem, removing ...
	I0108 21:28:50.057914  240774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-150013/.minikube/ca.pem
	I0108 21:28:50.057938  240774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-150013/.minikube/ca.pem (1078 bytes)
	I0108 21:28:50.057984  240774 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-150013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca-key.pem org=jenkins.multinode-379549-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-379549-m02]
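configureAuth mints a server certificate whose SANs are the node IPs and names from the "san=[...]" line above, signed by the minikube CA. A compact sketch with crypto/x509 that is self-signed for brevity (the real code signs with ca-key.pem), using the SANs and the 26280h expiry from the log:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-379549-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the log's "san=[...]" line.
		DNSNames:    []string{"localhost", "minikube", "multinode-379549-m02"},
		IPAddresses: []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
	}
	// Self-signed (template == parent) for the sketch; minikube signs
	// with its CA certificate and key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```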
	I0108 21:28:50.199808  240774 provision.go:172] copyRemoteCerts
	I0108 21:28:50.199882  240774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:28:50.199920  240774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-379549-m02
	I0108 21:28:50.215704  240774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/multinode-379549-m02/id_rsa Username:docker}
	I0108 21:28:50.313751  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 21:28:50.313814  240774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:28:50.335251  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 21:28:50.335307  240774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0108 21:28:50.355674  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 21:28:50.355731  240774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 21:28:50.375982  240774 provision.go:86] duration metric: configureAuth took 334.576787ms
	I0108 21:28:50.376012  240774 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:28:50.376166  240774 config.go:182] Loaded profile config "multinode-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:28:50.376267  240774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-379549-m02
	I0108 21:28:50.392369  240774 main.go:141] libmachine: Using SSH client type: native
	I0108 21:28:50.392690  240774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0108 21:28:50.392705  240774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 21:28:50.612702  240774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 21:28:50.612729  240774 machine.go:91] provisioned docker machine in 3.900797419s
	I0108 21:28:50.612738  240774 client.go:171] LocalClient.Create took 10.190110065s
	I0108 21:28:50.612766  240774 start.go:167] duration metric: libmachine.API.Create for "multinode-379549" took 10.190166071s
	I0108 21:28:50.612772  240774 start.go:300] post-start starting for "multinode-379549-m02" (driver="docker")
	I0108 21:28:50.612782  240774 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:28:50.612890  240774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:28:50.612927  240774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-379549-m02
	I0108 21:28:50.629757  240774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/multinode-379549-m02/id_rsa Username:docker}
	I0108 21:28:50.726027  240774 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:28:50.728898  240774 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0108 21:28:50.728930  240774 command_runner.go:130] > NAME="Ubuntu"
	I0108 21:28:50.728939  240774 command_runner.go:130] > VERSION_ID="22.04"
	I0108 21:28:50.728947  240774 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0108 21:28:50.728954  240774 command_runner.go:130] > VERSION_CODENAME=jammy
	I0108 21:28:50.728959  240774 command_runner.go:130] > ID=ubuntu
	I0108 21:28:50.728965  240774 command_runner.go:130] > ID_LIKE=debian
	I0108 21:28:50.728973  240774 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0108 21:28:50.728984  240774 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0108 21:28:50.728996  240774 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0108 21:28:50.729014  240774 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0108 21:28:50.729024  240774 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0108 21:28:50.729096  240774 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:28:50.729133  240774 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:28:50.729152  240774 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:28:50.729165  240774 info.go:137] Remote host: Ubuntu 22.04.3 LTS
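The "Couldn't set key ..., no corresponding struct field found" warnings above are libmachine mapping /etc/os-release entries onto a fixed struct and skipping keys it has no field for. A sketch of the underlying parse, reading the KEY="value" lines into a map:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	info := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		// Keys like VERSION_CODENAME that have no struct counterpart
		// are what trigger the "Couldn't set key" warnings above.
		info[k] = strings.Trim(v, `"`)
	}
	fmt.Println(info["PRETTY_NAME"]) // e.g. Ubuntu 22.04.3 LTS
}
```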
	I0108 21:28:50.729179  240774 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-150013/.minikube/addons for local assets ...
	I0108 21:28:50.729238  240774 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-150013/.minikube/files for local assets ...
	I0108 21:28:50.729336  240774 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-150013/.minikube/files/etc/ssl/certs/1566482.pem -> 1566482.pem in /etc/ssl/certs
	I0108 21:28:50.729353  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/files/etc/ssl/certs/1566482.pem -> /etc/ssl/certs/1566482.pem
	I0108 21:28:50.729476  240774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:28:50.736834  240774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/files/etc/ssl/certs/1566482.pem --> /etc/ssl/certs/1566482.pem (1708 bytes)
	I0108 21:28:50.757866  240774 start.go:303] post-start completed in 145.077284ms
	I0108 21:28:50.758258  240774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-379549-m02
	I0108 21:28:50.774446  240774 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/config.json ...
	I0108 21:28:50.774737  240774 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:28:50.774791  240774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-379549-m02
	I0108 21:28:50.791116  240774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/multinode-379549-m02/id_rsa Username:docker}
	I0108 21:28:50.881885  240774 command_runner.go:130] > 36%
	I0108 21:28:50.881961  240774 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:28:50.885655  240774 command_runner.go:130] > 189G
	I0108 21:28:50.885911  240774 start.go:128] duration metric: createHost completed in 10.465492362s
	I0108 21:28:50.885930  240774 start.go:83] releasing machines lock for "multinode-379549-m02", held for 10.465639454s
	I0108 21:28:50.886012  240774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-379549-m02
	I0108 21:28:50.904432  240774 out.go:177] * Found network options:
	I0108 21:28:50.906119  240774 out.go:177]   - NO_PROXY=192.168.58.2
	W0108 21:28:50.907433  240774 proxy.go:119] fail to check proxy env: Error ip not in block
	W0108 21:28:50.907497  240774 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 21:28:50.907575  240774 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 21:28:50.907621  240774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:28:50.907679  240774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-379549-m02
	I0108 21:28:50.907623  240774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-379549-m02
	I0108 21:28:50.924612  240774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/multinode-379549-m02/id_rsa Username:docker}
	I0108 21:28:50.925247  240774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/multinode-379549-m02/id_rsa Username:docker}
	I0108 21:28:51.150400  240774 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 21:28:51.150495  240774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 21:28:51.154580  240774 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0108 21:28:51.154609  240774 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0108 21:28:51.154620  240774 command_runner.go:130] > Device: b0h/176d	Inode: 556131      Links: 1
	I0108 21:28:51.154639  240774 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 21:28:51.154650  240774 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0108 21:28:51.154662  240774 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0108 21:28:51.154673  240774 command_runner.go:130] > Change: 2024-01-08 21:09:21.185659885 +0000
	I0108 21:28:51.154682  240774 command_runner.go:130] >  Birth: 2024-01-08 21:09:21.185659885 +0000
	I0108 21:28:51.154747  240774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:28:51.171939  240774 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 21:28:51.172018  240774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:28:51.197327  240774 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0108 21:28:51.197379  240774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
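With CRI-O and kindnet, minikube sidelines the default bridge/podman CNI configs by renaming them with a .mk_disabled suffix rather than deleting them, as the find/mv commands above show. A Go equivalent of that step:

```go
package main

import (
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// The same patterns the log's find commands match.
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, err := filepath.Glob(pattern)
		if err != nil {
			panic(err)
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already sidelined
			}
			// Rename rather than delete, so the config can be restored.
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				panic(err)
			}
		}
	}
}
```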
	I0108 21:28:51.197388  240774 start.go:475] detecting cgroup driver to use...
	I0108 21:28:51.197417  240774 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 21:28:51.197532  240774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:28:51.210855  240774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:28:51.222234  240774 docker.go:203] disabling cri-docker service (if available) ...
	I0108 21:28:51.222294  240774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 21:28:51.234004  240774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 21:28:51.246140  240774 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 21:28:51.327903  240774 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 21:28:51.406678  240774 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0108 21:28:51.406710  240774 docker.go:219] disabling docker service ...
	I0108 21:28:51.406759  240774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:28:51.423443  240774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:28:51.433769  240774 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:28:51.444212  240774 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0108 21:28:51.509429  240774 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:28:51.593334  240774 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0108 21:28:51.593416  240774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:28:51.603497  240774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:28:51.617069  240774 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0108 21:28:51.617946  240774 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 21:28:51.618016  240774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:28:51.626655  240774 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 21:28:51.626708  240774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:28:51.635184  240774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:28:51.643838  240774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:28:51.652557  240774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 21:28:51.660594  240774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:28:51.667497  240774 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0108 21:28:51.668185  240774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:28:51.675522  240774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:28:51.749702  240774 ssh_runner.go:195] Run: sudo systemctl restart crio
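The pause image and cgroup manager are set by rewriting /etc/crio/crio.conf.d/02-crio.conf in place before restarting CRI-O. A regexp-based sketch equivalent to the first two sed invocations above (it omits the conmon_cgroup insertion for brevity):

```go
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
	// A daemon-reload and `systemctl restart crio` follow, as in the log.
}
```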
	I0108 21:28:51.846849  240774 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 21:28:51.846930  240774 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 21:28:51.850252  240774 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0108 21:28:51.850275  240774 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 21:28:51.850288  240774 command_runner.go:130] > Device: b9h/185d	Inode: 190         Links: 1
	I0108 21:28:51.850298  240774 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 21:28:51.850311  240774 command_runner.go:130] > Access: 2024-01-08 21:28:51.833807015 +0000
	I0108 21:28:51.850329  240774 command_runner.go:130] > Modify: 2024-01-08 21:28:51.833807015 +0000
	I0108 21:28:51.850338  240774 command_runner.go:130] > Change: 2024-01-08 21:28:51.833807015 +0000
	I0108 21:28:51.850344  240774 command_runner.go:130] >  Birth: -
	I0108 21:28:51.850399  240774 start.go:543] Will wait 60s for crictl version
	I0108 21:28:51.850446  240774 ssh_runner.go:195] Run: which crictl
	I0108 21:28:51.853249  240774 command_runner.go:130] > /usr/bin/crictl
	I0108 21:28:51.853352  240774 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 21:28:51.883520  240774 command_runner.go:130] > Version:  0.1.0
	I0108 21:28:51.883546  240774 command_runner.go:130] > RuntimeName:  cri-o
	I0108 21:28:51.883554  240774 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0108 21:28:51.883562  240774 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 21:28:51.885760  240774 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0108 21:28:51.885838  240774 ssh_runner.go:195] Run: crio --version
	I0108 21:28:51.917677  240774 command_runner.go:130] > crio version 1.24.6
	I0108 21:28:51.917700  240774 command_runner.go:130] > Version:          1.24.6
	I0108 21:28:51.917707  240774 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0108 21:28:51.917711  240774 command_runner.go:130] > GitTreeState:     clean
	I0108 21:28:51.917720  240774 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0108 21:28:51.917724  240774 command_runner.go:130] > GoVersion:        go1.18.2
	I0108 21:28:51.917733  240774 command_runner.go:130] > Compiler:         gc
	I0108 21:28:51.917737  240774 command_runner.go:130] > Platform:         linux/amd64
	I0108 21:28:51.917744  240774 command_runner.go:130] > Linkmode:         dynamic
	I0108 21:28:51.917752  240774 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 21:28:51.917756  240774 command_runner.go:130] > SeccompEnabled:   true
	I0108 21:28:51.917760  240774 command_runner.go:130] > AppArmorEnabled:  false
	I0108 21:28:51.919435  240774 ssh_runner.go:195] Run: crio --version
	I0108 21:28:51.950675  240774 command_runner.go:130] > crio version 1.24.6
	I0108 21:28:51.950702  240774 command_runner.go:130] > Version:          1.24.6
	I0108 21:28:51.950711  240774 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0108 21:28:51.950715  240774 command_runner.go:130] > GitTreeState:     clean
	I0108 21:28:51.950724  240774 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0108 21:28:51.950728  240774 command_runner.go:130] > GoVersion:        go1.18.2
	I0108 21:28:51.950732  240774 command_runner.go:130] > Compiler:         gc
	I0108 21:28:51.950736  240774 command_runner.go:130] > Platform:         linux/amd64
	I0108 21:28:51.950768  240774 command_runner.go:130] > Linkmode:         dynamic
	I0108 21:28:51.950782  240774 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 21:28:51.950786  240774 command_runner.go:130] > SeccompEnabled:   true
	I0108 21:28:51.950790  240774 command_runner.go:130] > AppArmorEnabled:  false
	I0108 21:28:51.955582  240774 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0108 21:28:51.956873  240774 out.go:177]   - env NO_PROXY=192.168.58.2
	I0108 21:28:51.958148  240774 cli_runner.go:164] Run: docker network inspect multinode-379549 --format "{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}"
	I0108 21:28:51.974234  240774 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0108 21:28:51.977624  240774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
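
The bash one-liner above is an atomic hosts-file edit: it filters out any stale host.minikube.internal line, appends the gateway mapping (192.168.58.1 here), writes the result to a temp file, and sudo-copies it over /etc/hosts. A rough Go equivalent of the filter-and-append step, assuming the caller handles the privileged write-back:

package main

import (
	"fmt"
	"strings"
)

// updateHosts mirrors the grep/echo pipeline from the log: drop any line
// ending in "<tab>host.minikube.internal", then append a fresh entry.
func updateHosts(hosts, gatewayIP string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // stale entry from a previous start
		}
		kept = append(kept, line)
	}
	kept = append(kept, gatewayIP+"\thost.minikube.internal")
	return strings.Join(kept, "\n")
}

func main() {
	fmt.Println(updateHosts("127.0.0.1\tlocalhost", "192.168.58.1"))
}
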
	I0108 21:28:51.987687  240774 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549 for IP: 192.168.58.3
	I0108 21:28:51.987714  240774 certs.go:190] acquiring lock for shared ca certs: {Name:mk66e763e1c1c88a577c7e7f60df668cab98f63b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:28:51.987879  240774 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-150013/.minikube/ca.key
	I0108 21:28:51.987936  240774 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-150013/.minikube/proxy-client-ca.key
	I0108 21:28:51.987953  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 21:28:51.987971  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 21:28:51.987987  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 21:28:51.988005  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 21:28:51.988073  240774 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/home/jenkins/minikube-integration/17866-150013/.minikube/certs/156648.pem (1338 bytes)
	W0108 21:28:51.988118  240774 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-150013/.minikube/certs/home/jenkins/minikube-integration/17866-150013/.minikube/certs/156648_empty.pem, impossibly tiny 0 bytes
	I0108 21:28:51.988134  240774 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca-key.pem (1679 bytes)
	I0108 21:28:51.988171  240774 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:28:51.988205  240774 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/home/jenkins/minikube-integration/17866-150013/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:28:51.988241  240774 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/home/jenkins/minikube-integration/17866-150013/.minikube/certs/key.pem (1675 bytes)
	I0108 21:28:51.988295  240774 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-150013/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-150013/.minikube/files/etc/ssl/certs/1566482.pem (1708 bytes)
	I0108 21:28:51.988332  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:28:51.988352  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/156648.pem -> /usr/share/ca-certificates/156648.pem
	I0108 21:28:51.988372  240774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-150013/.minikube/files/etc/ssl/certs/1566482.pem -> /usr/share/ca-certificates/1566482.pem
	I0108 21:28:51.988738  240774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:28:52.009956  240774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:28:52.031002  240774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:28:52.052088  240774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 21:28:52.072691  240774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:28:52.093920  240774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/certs/156648.pem --> /usr/share/ca-certificates/156648.pem (1338 bytes)
	I0108 21:28:52.114585  240774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/files/etc/ssl/certs/1566482.pem --> /usr/share/ca-certificates/1566482.pem (1708 bytes)
	I0108 21:28:52.135599  240774 ssh_runner.go:195] Run: openssl version
	I0108 21:28:52.140387  240774 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0108 21:28:52.140466  240774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:28:52.148887  240774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:28:52.151943  240774 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:28:52.151997  240774 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:09 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:28:52.152036  240774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:28:52.157992  240774 command_runner.go:130] > b5213941
	I0108 21:28:52.158245  240774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:28:52.166344  240774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/156648.pem && ln -fs /usr/share/ca-certificates/156648.pem /etc/ssl/certs/156648.pem"
	I0108 21:28:52.174497  240774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/156648.pem
	I0108 21:28:52.177436  240774 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 21:15 /usr/share/ca-certificates/156648.pem
	I0108 21:28:52.177514  240774 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:15 /usr/share/ca-certificates/156648.pem
	I0108 21:28:52.177547  240774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/156648.pem
	I0108 21:28:52.183319  240774 command_runner.go:130] > 51391683
	I0108 21:28:52.183588  240774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/156648.pem /etc/ssl/certs/51391683.0"
	I0108 21:28:52.191641  240774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1566482.pem && ln -fs /usr/share/ca-certificates/1566482.pem /etc/ssl/certs/1566482.pem"
	I0108 21:28:52.199551  240774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1566482.pem
	I0108 21:28:52.202581  240774 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 21:15 /usr/share/ca-certificates/1566482.pem
	I0108 21:28:52.202610  240774 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:15 /usr/share/ca-certificates/1566482.pem
	I0108 21:28:52.202648  240774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1566482.pem
	I0108 21:28:52.208392  240774 command_runner.go:130] > 3ec20f2e
	I0108 21:28:52.208647  240774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1566482.pem /etc/ssl/certs/3ec20f2e.0"
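
The hash-and-symlink sequence above is OpenSSL's hashed certificate directory convention: "openssl x509 -hash -noout" prints the subject-name hash (b5213941 for minikubeCA above), and TLS clients look the CA up in /etc/ssl/certs under the name <hash>.0. A sketch of the same install step, with paths taken from the log (hash collisions, which would need a .1 suffix, are ignored for brevity):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert installs certPath into OpenSSL's hashed certificate directory
// the same way the log does: compute the subject hash with openssl, then
// symlink <hash>.0 to the certificate.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mimic `ln -fs`: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
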
	I0108 21:28:52.216896  240774 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 21:28:52.219869  240774 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:28:52.219920  240774 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
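
Note that the failed ls here is informational rather than fatal: exit status 2 on /var/lib/minikube/certs/etcd is read as "certs directory doesn't exist, likely first start". Run locally, the same probe is just a stat, roughly:

package main

import (
	"errors"
	"fmt"
	"os"
)

// isFirstStart reports whether the etcd certs directory is absent, the
// condition the log above treats as a likely first start of the node.
func isFirstStart() bool {
	_, err := os.Stat("/var/lib/minikube/certs/etcd")
	return errors.Is(err, os.ErrNotExist)
}

func main() {
	fmt.Println("likely first start:", isFirstStart())
}
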
	I0108 21:28:52.220006  240774 ssh_runner.go:195] Run: crio config
	I0108 21:28:52.257404  240774 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0108 21:28:52.257436  240774 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0108 21:28:52.257464  240774 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0108 21:28:52.257471  240774 command_runner.go:130] > #
	I0108 21:28:52.257483  240774 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0108 21:28:52.257492  240774 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0108 21:28:52.257503  240774 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0108 21:28:52.257521  240774 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0108 21:28:52.257531  240774 command_runner.go:130] > # reload'.
	I0108 21:28:52.257542  240774 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0108 21:28:52.257556  240774 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0108 21:28:52.257568  240774 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0108 21:28:52.257582  240774 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0108 21:28:52.257592  240774 command_runner.go:130] > [crio]
	I0108 21:28:52.257602  240774 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0108 21:28:52.257614  240774 command_runner.go:130] > # container images, in this directory.
	I0108 21:28:52.257629  240774 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0108 21:28:52.257643  240774 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0108 21:28:52.257654  240774 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0108 21:28:52.257669  240774 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0108 21:28:52.257680  240774 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0108 21:28:52.257690  240774 command_runner.go:130] > # storage_driver = "vfs"
	I0108 21:28:52.257701  240774 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0108 21:28:52.257714  240774 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0108 21:28:52.257729  240774 command_runner.go:130] > # storage_option = [
	I0108 21:28:52.257738  240774 command_runner.go:130] > # ]
	I0108 21:28:52.257749  240774 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0108 21:28:52.257766  240774 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0108 21:28:52.257780  240774 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0108 21:28:52.257789  240774 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0108 21:28:52.257798  240774 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0108 21:28:52.257806  240774 command_runner.go:130] > # always happen on a node reboot
	I0108 21:28:52.257813  240774 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0108 21:28:52.257822  240774 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0108 21:28:52.257835  240774 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0108 21:28:52.257849  240774 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0108 21:28:52.257862  240774 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0108 21:28:52.257877  240774 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0108 21:28:52.257890  240774 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0108 21:28:52.257901  240774 command_runner.go:130] > # internal_wipe = true
	I0108 21:28:52.257912  240774 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0108 21:28:52.257922  240774 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0108 21:28:52.257931  240774 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0108 21:28:52.257940  240774 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0108 21:28:52.257950  240774 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0108 21:28:52.257955  240774 command_runner.go:130] > [crio.api]
	I0108 21:28:52.257961  240774 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0108 21:28:52.257965  240774 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0108 21:28:52.257971  240774 command_runner.go:130] > # IP address on which the stream server will listen.
	I0108 21:28:52.257975  240774 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0108 21:28:52.257982  240774 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0108 21:28:52.257986  240774 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0108 21:28:52.257990  240774 command_runner.go:130] > # stream_port = "0"
	I0108 21:28:52.257996  240774 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0108 21:28:52.258000  240774 command_runner.go:130] > # stream_enable_tls = false
	I0108 21:28:52.258006  240774 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0108 21:28:52.258010  240774 command_runner.go:130] > # stream_idle_timeout = ""
	I0108 21:28:52.258016  240774 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0108 21:28:52.258024  240774 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0108 21:28:52.258029  240774 command_runner.go:130] > # minutes.
	I0108 21:28:52.258037  240774 command_runner.go:130] > # stream_tls_cert = ""
	I0108 21:28:52.258047  240774 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0108 21:28:52.258058  240774 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0108 21:28:52.258065  240774 command_runner.go:130] > # stream_tls_key = ""
	I0108 21:28:52.258076  240774 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0108 21:28:52.258085  240774 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0108 21:28:52.258094  240774 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0108 21:28:52.258101  240774 command_runner.go:130] > # stream_tls_ca = ""
	I0108 21:28:52.258114  240774 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 21:28:52.258123  240774 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0108 21:28:52.258135  240774 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 21:28:52.258148  240774 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0108 21:28:52.258166  240774 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0108 21:28:52.258176  240774 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0108 21:28:52.258182  240774 command_runner.go:130] > [crio.runtime]
	I0108 21:28:52.258189  240774 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0108 21:28:52.258202  240774 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0108 21:28:52.258209  240774 command_runner.go:130] > # "nofile=1024:2048"
	I0108 21:28:52.258220  240774 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0108 21:28:52.258227  240774 command_runner.go:130] > # default_ulimits = [
	I0108 21:28:52.258233  240774 command_runner.go:130] > # ]
	I0108 21:28:52.258243  240774 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0108 21:28:52.258250  240774 command_runner.go:130] > # no_pivot = false
	I0108 21:28:52.258259  240774 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0108 21:28:52.258269  240774 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0108 21:28:52.258275  240774 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0108 21:28:52.258281  240774 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0108 21:28:52.258289  240774 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0108 21:28:52.258301  240774 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 21:28:52.258307  240774 command_runner.go:130] > # conmon = ""
	I0108 21:28:52.258321  240774 command_runner.go:130] > # Cgroup setting for conmon
	I0108 21:28:52.258331  240774 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0108 21:28:52.258338  240774 command_runner.go:130] > conmon_cgroup = "pod"
	I0108 21:28:52.258348  240774 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0108 21:28:52.258356  240774 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0108 21:28:52.258367  240774 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 21:28:52.258376  240774 command_runner.go:130] > # conmon_env = [
	I0108 21:28:52.258382  240774 command_runner.go:130] > # ]
	I0108 21:28:52.258392  240774 command_runner.go:130] > # Additional environment variables to set for all the
	I0108 21:28:52.258401  240774 command_runner.go:130] > # containers. These are overridden if set in the
	I0108 21:28:52.258411  240774 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0108 21:28:52.258417  240774 command_runner.go:130] > # default_env = [
	I0108 21:28:52.258423  240774 command_runner.go:130] > # ]
	I0108 21:28:52.258432  240774 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0108 21:28:52.258439  240774 command_runner.go:130] > # selinux = false
	I0108 21:28:52.258447  240774 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0108 21:28:52.258456  240774 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0108 21:28:52.258466  240774 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0108 21:28:52.258473  240774 command_runner.go:130] > # seccomp_profile = ""
	I0108 21:28:52.258482  240774 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0108 21:28:52.258492  240774 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0108 21:28:52.258503  240774 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0108 21:28:52.258511  240774 command_runner.go:130] > # which might increase security.
	I0108 21:28:52.258519  240774 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0108 21:28:52.258530  240774 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0108 21:28:52.258542  240774 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0108 21:28:52.258551  240774 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0108 21:28:52.258561  240774 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0108 21:28:52.258569  240774 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:28:52.258577  240774 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0108 21:28:52.258586  240774 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0108 21:28:52.258594  240774 command_runner.go:130] > # the cgroup blockio controller.
	I0108 21:28:52.258600  240774 command_runner.go:130] > # blockio_config_file = ""
	I0108 21:28:52.258609  240774 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0108 21:28:52.258616  240774 command_runner.go:130] > # irqbalance daemon.
	I0108 21:28:52.258625  240774 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0108 21:28:52.258635  240774 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0108 21:28:52.258643  240774 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:28:52.258650  240774 command_runner.go:130] > # rdt_config_file = ""
	I0108 21:28:52.258659  240774 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0108 21:28:52.258666  240774 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0108 21:28:52.258676  240774 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0108 21:28:52.258684  240774 command_runner.go:130] > # separate_pull_cgroup = ""
	I0108 21:28:52.258695  240774 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0108 21:28:52.258705  240774 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0108 21:28:52.258712  240774 command_runner.go:130] > # will be added.
	I0108 21:28:52.258719  240774 command_runner.go:130] > # default_capabilities = [
	I0108 21:28:52.258726  240774 command_runner.go:130] > # 	"CHOWN",
	I0108 21:28:52.258733  240774 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0108 21:28:52.258739  240774 command_runner.go:130] > # 	"FSETID",
	I0108 21:28:52.258746  240774 command_runner.go:130] > # 	"FOWNER",
	I0108 21:28:52.258752  240774 command_runner.go:130] > # 	"SETGID",
	I0108 21:28:52.258758  240774 command_runner.go:130] > # 	"SETUID",
	I0108 21:28:52.258764  240774 command_runner.go:130] > # 	"SETPCAP",
	I0108 21:28:52.258771  240774 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0108 21:28:52.258777  240774 command_runner.go:130] > # 	"KILL",
	I0108 21:28:52.258781  240774 command_runner.go:130] > # ]
	I0108 21:28:52.258790  240774 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0108 21:28:52.258801  240774 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0108 21:28:52.258810  240774 command_runner.go:130] > # add_inheritable_capabilities = true
	I0108 21:28:52.258820  240774 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0108 21:28:52.258828  240774 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 21:28:52.258835  240774 command_runner.go:130] > # default_sysctls = [
	I0108 21:28:52.258840  240774 command_runner.go:130] > # ]
	I0108 21:28:52.258848  240774 command_runner.go:130] > # List of devices on the host that a
	I0108 21:28:52.258859  240774 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0108 21:28:52.258869  240774 command_runner.go:130] > # allowed_devices = [
	I0108 21:28:52.258875  240774 command_runner.go:130] > # 	"/dev/fuse",
	I0108 21:28:52.258881  240774 command_runner.go:130] > # ]
	I0108 21:28:52.258889  240774 command_runner.go:130] > # List of additional devices, specified as
	I0108 21:28:52.258925  240774 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0108 21:28:52.258934  240774 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0108 21:28:52.258944  240774 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 21:28:52.258950  240774 command_runner.go:130] > # additional_devices = [
	I0108 21:28:52.258953  240774 command_runner.go:130] > # ]
	I0108 21:28:52.258960  240774 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0108 21:28:52.258967  240774 command_runner.go:130] > # cdi_spec_dirs = [
	I0108 21:28:52.258974  240774 command_runner.go:130] > # 	"/etc/cdi",
	I0108 21:28:52.258982  240774 command_runner.go:130] > # 	"/var/run/cdi",
	I0108 21:28:52.258988  240774 command_runner.go:130] > # ]
	I0108 21:28:52.258998  240774 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0108 21:28:52.259009  240774 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0108 21:28:52.259016  240774 command_runner.go:130] > # Defaults to false.
	I0108 21:28:52.259024  240774 command_runner.go:130] > # device_ownership_from_security_context = false
	I0108 21:28:52.259034  240774 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0108 21:28:52.259040  240774 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0108 21:28:52.259044  240774 command_runner.go:130] > # hooks_dir = [
	I0108 21:28:52.259052  240774 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0108 21:28:52.259058  240774 command_runner.go:130] > # ]
	I0108 21:28:52.259070  240774 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0108 21:28:52.259080  240774 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0108 21:28:52.259089  240774 command_runner.go:130] > # its default mounts from the following two files:
	I0108 21:28:52.259094  240774 command_runner.go:130] > #
	I0108 21:28:52.259104  240774 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0108 21:28:52.259114  240774 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0108 21:28:52.259123  240774 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0108 21:28:52.259126  240774 command_runner.go:130] > #
	I0108 21:28:52.259134  240774 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0108 21:28:52.259144  240774 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0108 21:28:52.259156  240774 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0108 21:28:52.259164  240774 command_runner.go:130] > #      only add mounts it finds in this file.
	I0108 21:28:52.259169  240774 command_runner.go:130] > #
	I0108 21:28:52.259177  240774 command_runner.go:130] > # default_mounts_file = ""
	I0108 21:28:52.259185  240774 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0108 21:28:52.259196  240774 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0108 21:28:52.259203  240774 command_runner.go:130] > # pids_limit = 0
	I0108 21:28:52.259211  240774 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0108 21:28:52.259218  240774 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0108 21:28:52.259228  240774 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0108 21:28:52.259241  240774 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0108 21:28:52.259248  240774 command_runner.go:130] > # log_size_max = -1
	I0108 21:28:52.259259  240774 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0108 21:28:52.259266  240774 command_runner.go:130] > # log_to_journald = false
	I0108 21:28:52.259276  240774 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0108 21:28:52.259286  240774 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0108 21:28:52.259293  240774 command_runner.go:130] > # Path to directory for container attach sockets.
	I0108 21:28:52.259298  240774 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0108 21:28:52.259304  240774 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0108 21:28:52.259318  240774 command_runner.go:130] > # bind_mount_prefix = ""
	I0108 21:28:52.259327  240774 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0108 21:28:52.259334  240774 command_runner.go:130] > # read_only = false
	I0108 21:28:52.259344  240774 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0108 21:28:52.259354  240774 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0108 21:28:52.259361  240774 command_runner.go:130] > # live configuration reload.
	I0108 21:28:52.259368  240774 command_runner.go:130] > # log_level = "info"
	I0108 21:28:52.259377  240774 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0108 21:28:52.259382  240774 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:28:52.259386  240774 command_runner.go:130] > # log_filter = ""
	I0108 21:28:52.259396  240774 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0108 21:28:52.259406  240774 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0108 21:28:52.259413  240774 command_runner.go:130] > # separated by comma.
	I0108 21:28:52.259420  240774 command_runner.go:130] > # uid_mappings = ""
	I0108 21:28:52.259430  240774 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0108 21:28:52.259440  240774 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0108 21:28:52.259447  240774 command_runner.go:130] > # separated by comma.
	I0108 21:28:52.259454  240774 command_runner.go:130] > # gid_mappings = ""
	I0108 21:28:52.259463  240774 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0108 21:28:52.259470  240774 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 21:28:52.259477  240774 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 21:28:52.259485  240774 command_runner.go:130] > # minimum_mappable_uid = -1
	I0108 21:28:52.259496  240774 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0108 21:28:52.259505  240774 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 21:28:52.259515  240774 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 21:28:52.259525  240774 command_runner.go:130] > # minimum_mappable_gid = -1
	I0108 21:28:52.259535  240774 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0108 21:28:52.259544  240774 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0108 21:28:52.259551  240774 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0108 21:28:52.259555  240774 command_runner.go:130] > # ctr_stop_timeout = 30
	I0108 21:28:52.259563  240774 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0108 21:28:52.259575  240774 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0108 21:28:52.259584  240774 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0108 21:28:52.259592  240774 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0108 21:28:52.259599  240774 command_runner.go:130] > # drop_infra_ctr = true
	I0108 21:28:52.259610  240774 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0108 21:28:52.259619  240774 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0108 21:28:52.259630  240774 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0108 21:28:52.259635  240774 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0108 21:28:52.259643  240774 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0108 21:28:52.259651  240774 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0108 21:28:52.259659  240774 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0108 21:28:52.259670  240774 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0108 21:28:52.259677  240774 command_runner.go:130] > # pinns_path = ""
	I0108 21:28:52.259687  240774 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0108 21:28:52.259698  240774 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0108 21:28:52.259708  240774 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0108 21:28:52.259715  240774 command_runner.go:130] > # default_runtime = "runc"
	I0108 21:28:52.259721  240774 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0108 21:28:52.259728  240774 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0108 21:28:52.259748  240774 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0108 21:28:52.259758  240774 command_runner.go:130] > # creation as a file is not desired either.
	I0108 21:28:52.259771  240774 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0108 21:28:52.259779  240774 command_runner.go:130] > # the hostname is being managed dynamically.
	I0108 21:28:52.259787  240774 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0108 21:28:52.259793  240774 command_runner.go:130] > # ]
	I0108 21:28:52.259803  240774 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0108 21:28:52.259812  240774 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0108 21:28:52.259822  240774 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0108 21:28:52.259834  240774 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0108 21:28:52.259840  240774 command_runner.go:130] > #
	I0108 21:28:52.259848  240774 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0108 21:28:52.259857  240774 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0108 21:28:52.259864  240774 command_runner.go:130] > #  runtime_type = "oci"
	I0108 21:28:52.259872  240774 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0108 21:28:52.259880  240774 command_runner.go:130] > #  privileged_without_host_devices = false
	I0108 21:28:52.259887  240774 command_runner.go:130] > #  allowed_annotations = []
	I0108 21:28:52.259893  240774 command_runner.go:130] > # Where:
	I0108 21:28:52.259902  240774 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0108 21:28:52.259913  240774 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0108 21:28:52.259923  240774 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0108 21:28:52.259933  240774 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0108 21:28:52.259939  240774 command_runner.go:130] > #   in $PATH.
	I0108 21:28:52.259950  240774 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0108 21:28:52.259958  240774 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0108 21:28:52.259968  240774 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0108 21:28:52.259975  240774 command_runner.go:130] > #   state.
	I0108 21:28:52.259984  240774 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0108 21:28:52.259989  240774 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0108 21:28:52.259995  240774 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0108 21:28:52.260000  240774 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0108 21:28:52.260008  240774 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0108 21:28:52.260014  240774 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0108 21:28:52.260019  240774 command_runner.go:130] > #   The currently recognized values are:
	I0108 21:28:52.260024  240774 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0108 21:28:52.260031  240774 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0108 21:28:52.260037  240774 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0108 21:28:52.260042  240774 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0108 21:28:52.260050  240774 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0108 21:28:52.260056  240774 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0108 21:28:52.260062  240774 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0108 21:28:52.260068  240774 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0108 21:28:52.260073  240774 command_runner.go:130] > #   should be moved to the container's cgroup
	I0108 21:28:52.260077  240774 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0108 21:28:52.260083  240774 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0108 21:28:52.260087  240774 command_runner.go:130] > runtime_type = "oci"
	I0108 21:28:52.260091  240774 command_runner.go:130] > runtime_root = "/run/runc"
	I0108 21:28:52.260095  240774 command_runner.go:130] > runtime_config_path = ""
	I0108 21:28:52.260100  240774 command_runner.go:130] > monitor_path = ""
	I0108 21:28:52.260103  240774 command_runner.go:130] > monitor_cgroup = ""
	I0108 21:28:52.260107  240774 command_runner.go:130] > monitor_exec_cgroup = ""
	I0108 21:28:52.260135  240774 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0108 21:28:52.260138  240774 command_runner.go:130] > # running containers
	I0108 21:28:52.260142  240774 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0108 21:28:52.260149  240774 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0108 21:28:52.260155  240774 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0108 21:28:52.260161  240774 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0108 21:28:52.260165  240774 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0108 21:28:52.260170  240774 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0108 21:28:52.260174  240774 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0108 21:28:52.260179  240774 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0108 21:28:52.260183  240774 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0108 21:28:52.260187  240774 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0108 21:28:52.260193  240774 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0108 21:28:52.260198  240774 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0108 21:28:52.260204  240774 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0108 21:28:52.260211  240774 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0108 21:28:52.260218  240774 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0108 21:28:52.260223  240774 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0108 21:28:52.260233  240774 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0108 21:28:52.260240  240774 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0108 21:28:52.260245  240774 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0108 21:28:52.260252  240774 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0108 21:28:52.260255  240774 command_runner.go:130] > # Example:
	I0108 21:28:52.260260  240774 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0108 21:28:52.260264  240774 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0108 21:28:52.260270  240774 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0108 21:28:52.260275  240774 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0108 21:28:52.260278  240774 command_runner.go:130] > # cpuset = 0
	I0108 21:28:52.260282  240774 command_runner.go:130] > # cpushares = "0-1"
	I0108 21:28:52.260286  240774 command_runner.go:130] > # Where:
	I0108 21:28:52.260290  240774 command_runner.go:130] > # The workload name is workload-type.
	I0108 21:28:52.260298  240774 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0108 21:28:52.260303  240774 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0108 21:28:52.260310  240774 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0108 21:28:52.260321  240774 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0108 21:28:52.260326  240774 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0108 21:28:52.260330  240774 command_runner.go:130] > # 
	I0108 21:28:52.260336  240774 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0108 21:28:52.260339  240774 command_runner.go:130] > #
	I0108 21:28:52.260346  240774 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0108 21:28:52.260352  240774 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0108 21:28:52.260358  240774 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0108 21:28:52.260363  240774 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0108 21:28:52.260369  240774 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0108 21:28:52.260373  240774 command_runner.go:130] > [crio.image]
	I0108 21:28:52.260378  240774 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0108 21:28:52.260382  240774 command_runner.go:130] > # default_transport = "docker://"
	I0108 21:28:52.260388  240774 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0108 21:28:52.260394  240774 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0108 21:28:52.260398  240774 command_runner.go:130] > # global_auth_file = ""
	I0108 21:28:52.260402  240774 command_runner.go:130] > # The image used to instantiate infra containers.
	I0108 21:28:52.260407  240774 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:28:52.260412  240774 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0108 21:28:52.260418  240774 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0108 21:28:52.260423  240774 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0108 21:28:52.260428  240774 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:28:52.260432  240774 command_runner.go:130] > # pause_image_auth_file = ""
	I0108 21:28:52.260438  240774 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0108 21:28:52.260443  240774 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0108 21:28:52.260449  240774 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0108 21:28:52.260455  240774 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0108 21:28:52.260460  240774 command_runner.go:130] > # pause_command = "/pause"
	I0108 21:28:52.260469  240774 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0108 21:28:52.260479  240774 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0108 21:28:52.260488  240774 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0108 21:28:52.260495  240774 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0108 21:28:52.260501  240774 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0108 21:28:52.260505  240774 command_runner.go:130] > # signature_policy = ""
	I0108 21:28:52.260517  240774 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0108 21:28:52.260527  240774 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0108 21:28:52.260534  240774 command_runner.go:130] > # changing them here.
	I0108 21:28:52.260541  240774 command_runner.go:130] > # insecure_registries = [
	I0108 21:28:52.260546  240774 command_runner.go:130] > # ]
	I0108 21:28:52.260557  240774 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0108 21:28:52.260566  240774 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0108 21:28:52.260574  240774 command_runner.go:130] > # image_volumes = "mkdir"
	I0108 21:28:52.260583  240774 command_runner.go:130] > # Temporary directory to use for storing big files
	I0108 21:28:52.260591  240774 command_runner.go:130] > # big_files_temporary_dir = ""
	I0108 21:28:52.260601  240774 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0108 21:28:52.260608  240774 command_runner.go:130] > # CNI plugins.
	I0108 21:28:52.260614  240774 command_runner.go:130] > [crio.network]
	I0108 21:28:52.260624  240774 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0108 21:28:52.260631  240774 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0108 21:28:52.260635  240774 command_runner.go:130] > # cni_default_network = ""
	I0108 21:28:52.260640  240774 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0108 21:28:52.260645  240774 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0108 21:28:52.260650  240774 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0108 21:28:52.260654  240774 command_runner.go:130] > # plugin_dirs = [
	I0108 21:28:52.260658  240774 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0108 21:28:52.260661  240774 command_runner.go:130] > # ]
	I0108 21:28:52.260667  240774 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0108 21:28:52.260673  240774 command_runner.go:130] > [crio.metrics]
	I0108 21:28:52.260681  240774 command_runner.go:130] > # Globally enable or disable metrics support.
	I0108 21:28:52.260689  240774 command_runner.go:130] > # enable_metrics = false
	I0108 21:28:52.260697  240774 command_runner.go:130] > # Specify enabled metrics collectors.
	I0108 21:28:52.260705  240774 command_runner.go:130] > # Per default all metrics are enabled.
	I0108 21:28:52.260715  240774 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0108 21:28:52.260725  240774 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0108 21:28:52.260733  240774 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0108 21:28:52.260738  240774 command_runner.go:130] > # metrics_collectors = [
	I0108 21:28:52.260741  240774 command_runner.go:130] > # 	"operations",
	I0108 21:28:52.260746  240774 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0108 21:28:52.260751  240774 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0108 21:28:52.260755  240774 command_runner.go:130] > # 	"operations_errors",
	I0108 21:28:52.260759  240774 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0108 21:28:52.260763  240774 command_runner.go:130] > # 	"image_pulls_by_name",
	I0108 21:28:52.260767  240774 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0108 21:28:52.260771  240774 command_runner.go:130] > # 	"image_pulls_failures",
	I0108 21:28:52.260777  240774 command_runner.go:130] > # 	"image_pulls_successes",
	I0108 21:28:52.260781  240774 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0108 21:28:52.260785  240774 command_runner.go:130] > # 	"image_layer_reuse",
	I0108 21:28:52.260789  240774 command_runner.go:130] > # 	"containers_oom_total",
	I0108 21:28:52.260793  240774 command_runner.go:130] > # 	"containers_oom",
	I0108 21:28:52.260797  240774 command_runner.go:130] > # 	"processes_defunct",
	I0108 21:28:52.260801  240774 command_runner.go:130] > # 	"operations_total",
	I0108 21:28:52.260805  240774 command_runner.go:130] > # 	"operations_latency_seconds",
	I0108 21:28:52.260809  240774 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0108 21:28:52.260813  240774 command_runner.go:130] > # 	"operations_errors_total",
	I0108 21:28:52.260817  240774 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0108 21:28:52.260822  240774 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0108 21:28:52.260826  240774 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0108 21:28:52.260830  240774 command_runner.go:130] > # 	"image_pulls_success_total",
	I0108 21:28:52.260834  240774 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0108 21:28:52.260838  240774 command_runner.go:130] > # 	"containers_oom_count_total",
	I0108 21:28:52.260841  240774 command_runner.go:130] > # ]
	I0108 21:28:52.260846  240774 command_runner.go:130] > # The port on which the metrics server will listen.
	I0108 21:28:52.260850  240774 command_runner.go:130] > # metrics_port = 9090
	I0108 21:28:52.260855  240774 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0108 21:28:52.260859  240774 command_runner.go:130] > # metrics_socket = ""
	I0108 21:28:52.260864  240774 command_runner.go:130] > # The certificate for the secure metrics server.
	I0108 21:28:52.260870  240774 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0108 21:28:52.260876  240774 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0108 21:28:52.260880  240774 command_runner.go:130] > # certificate on any modification event.
	I0108 21:28:52.260884  240774 command_runner.go:130] > # metrics_cert = ""
	I0108 21:28:52.260889  240774 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0108 21:28:52.260894  240774 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0108 21:28:52.260897  240774 command_runner.go:130] > # metrics_key = ""
	I0108 21:28:52.260903  240774 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0108 21:28:52.260906  240774 command_runner.go:130] > [crio.tracing]
	I0108 21:28:52.260911  240774 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0108 21:28:52.260916  240774 command_runner.go:130] > # enable_tracing = false
	I0108 21:28:52.260921  240774 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0108 21:28:52.260926  240774 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0108 21:28:52.260930  240774 command_runner.go:130] > # Number of samples to collect per million spans.
	I0108 21:28:52.260935  240774 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0108 21:28:52.260940  240774 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0108 21:28:52.260944  240774 command_runner.go:130] > [crio.stats]
	I0108 21:28:52.260950  240774 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0108 21:28:52.260956  240774 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0108 21:28:52.260960  240774 command_runner.go:130] > # stats_collection_period = 0
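The [crio.metrics] block dumped above is entirely commented out, so this run serves no metrics endpoint; with enable_metrics set, CRI-O exposes Prometheus-format counters (the collector list above) on metrics_port, 9090 by default. A minimal Go sketch of scraping that endpoint, assuming metrics are enabled on localhost and no metrics_cert/metrics_key is configured (plain HTTP):

	// metrics_probe.go - a sketch under the assumptions stated above.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	func main() {
		// Default metrics_port from the config above; hypothetical local setup.
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			panic(err)
		}
		// Print only the fully-prefixed operations counters; per the comment
		// above, "operations" and "container_runtime_crio_operations" name
		// the same collector.
		for _, line := range strings.Split(string(body), "\n") {
			if strings.HasPrefix(line, "container_runtime_crio_operations") {
				fmt.Println(line)
			}
		}
	}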
	I0108 21:28:52.261009  240774 command_runner.go:130] ! time="2024-01-08 21:28:52.254872694Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0108 21:28:52.261028  240774 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0108 21:28:52.261091  240774 cni.go:84] Creating CNI manager for ""
	I0108 21:28:52.261097  240774 cni.go:136] 2 nodes found, recommending kindnet
	I0108 21:28:52.261109  240774 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:28:52.261130  240774 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-379549 NodeName:multinode-379549-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 21:28:52.261242  240774 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-379549-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:28:52.261294  240774 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-379549-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-379549 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 21:28:52.261347  240774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 21:28:52.268752  240774 command_runner.go:130] > kubeadm
	I0108 21:28:52.268771  240774 command_runner.go:130] > kubectl
	I0108 21:28:52.268776  240774 command_runner.go:130] > kubelet
	I0108 21:28:52.269456  240774 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:28:52.269510  240774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0108 21:28:52.277110  240774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0108 21:28:52.293311  240774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:28:52.308886  240774 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:28:52.311940  240774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:28:52.321663  240774 host.go:66] Checking if "multinode-379549" exists ...
	I0108 21:28:52.321896  240774 config.go:182] Loaded profile config "multinode-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:28:52.321948  240774 start.go:304] JoinCluster: &{Name:multinode-379549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-379549 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:28:52.322055  240774 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0108 21:28:52.322106  240774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-379549
	I0108 21:28:52.337912  240774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/multinode-379549/id_rsa Username:docker}
	I0108 21:28:52.481486  240774 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token fz3v4t.ns0seqm4oap4xibx --discovery-token-ca-cert-hash sha256:fe80ea8f0241372b35f859c8f235bcbcae49b73ca5a44c92d8472de9d18d4109 
	I0108 21:28:52.485572  240774 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 21:28:52.485616  240774 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token fz3v4t.ns0seqm4oap4xibx --discovery-token-ca-cert-hash sha256:fe80ea8f0241372b35f859c8f235bcbcae49b73ca5a44c92d8472de9d18d4109 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-379549-m02"
	I0108 21:28:52.518975  240774 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 21:28:52.546851  240774 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:28:52.546881  240774 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1047-gcp
	I0108 21:28:52.546890  240774 command_runner.go:130] > OS: Linux
	I0108 21:28:52.546896  240774 command_runner.go:130] > CGROUPS_CPU: enabled
	I0108 21:28:52.546902  240774 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0108 21:28:52.546908  240774 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0108 21:28:52.546913  240774 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0108 21:28:52.546918  240774 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0108 21:28:52.546923  240774 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0108 21:28:52.546932  240774 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0108 21:28:52.546937  240774 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0108 21:28:52.546948  240774 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0108 21:28:52.623178  240774 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 21:28:52.623212  240774 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0108 21:28:52.647936  240774 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:28:52.647962  240774 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:28:52.647967  240774 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 21:28:52.725316  240774 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0108 21:28:54.736686  240774 command_runner.go:130] > This node has joined the cluster:
	I0108 21:28:54.736719  240774 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0108 21:28:54.736730  240774 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0108 21:28:54.736741  240774 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0108 21:28:54.739316  240774 command_runner.go:130] ! W0108 21:28:52.518585    1108 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0108 21:28:54.739349  240774 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I0108 21:28:54.739373  240774 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:28:54.739405  240774 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token fz3v4t.ns0seqm4oap4xibx --discovery-token-ca-cert-hash sha256:fe80ea8f0241372b35f859c8f235bcbcae49b73ca5a44c92d8472de9d18d4109 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-379549-m02": (2.253770461s)
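The join sequence above has two halves: the control plane mints a bootstrap token and prints the matching join command (kubeadm token create --print-join-command --ttl=0, where --ttl=0 makes the token non-expiring), and the worker then runs that printed command. A minimal Go sketch of the first half, assuming kubeadm is on the control-plane host's PATH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mint a non-expiring bootstrap token and print the matching join
		// command, as the log above shows minikube doing over SSH.
		out, err := exec.Command("kubeadm", "token", "create",
			"--print-join-command", "--ttl=0").CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("%v: %s", err, out))
		}
		fmt.Printf("run on the worker: %s", out)
	}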
	I0108 21:28:54.739434  240774 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0108 21:28:54.823281  240774 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0108 21:28:54.899593  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=multinode-379549 minikube.k8s.io/updated_at=2024_01_08T21_28_54_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:28:54.969494  240774 command_runner.go:130] > node/multinode-379549-m02 labeled
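The labeling step above shells out to kubectl; the same labels can be applied in-process with a client-go strategic-merge patch. A sketch under the assumption that a kubeconfig for this cluster sits at the default ~/.kube/config path (node and label names taken from the log):

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/types"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes the default kubeconfig location (hypothetical setup).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// Merge-patch one of the labels kubectl applied above.
		patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/primary":"false"}}}`)
		_, err = cs.CoreV1().Nodes().Patch(context.TODO(), "multinode-379549-m02",
			types.StrategicMergePatchType, patch, metav1.PatchOptions{})
		if err != nil {
			panic(err)
		}
	}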
	I0108 21:28:54.972089  240774 start.go:306] JoinCluster complete in 2.650137951s
	I0108 21:28:54.972110  240774 cni.go:84] Creating CNI manager for ""
	I0108 21:28:54.972118  240774 cni.go:136] 2 nodes found, recommending kindnet
	I0108 21:28:54.972162  240774 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:28:54.975652  240774 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 21:28:54.975681  240774 command_runner.go:130] >   Size: 4085020   	Blocks: 7992       IO Block: 4096   regular file
	I0108 21:28:54.975691  240774 command_runner.go:130] > Device: 37h/55d	Inode: 560014      Links: 1
	I0108 21:28:54.975701  240774 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 21:28:54.975714  240774 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I0108 21:28:54.975724  240774 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I0108 21:28:54.975737  240774 command_runner.go:130] > Change: 2024-01-08 21:09:21.577697444 +0000
	I0108 21:28:54.975750  240774 command_runner.go:130] >  Birth: 2024-01-08 21:09:21.553695143 +0000
	I0108 21:28:54.975803  240774 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 21:28:54.975815  240774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 21:28:54.992827  240774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:28:55.193222  240774 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0108 21:28:55.196492  240774 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0108 21:28:55.199101  240774 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0108 21:28:55.220964  240774 command_runner.go:130] > daemonset.apps/kindnet configured
	I0108 21:28:55.225468  240774 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-150013/kubeconfig
	I0108 21:28:55.225854  240774 kapi.go:59] client config for multinode-379549: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/client.key", CAFile:"/home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:28:55.226306  240774 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 21:28:55.226321  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:55.226333  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:55.226342  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:55.228594  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:55.228615  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:55.228623  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:55.228629  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:55.228634  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:55.228643  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:55.228658  240774 round_trippers.go:580]     Content-Length: 291
	I0108 21:28:55.228668  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:55 GMT
	I0108 21:28:55.228680  240774 round_trippers.go:580]     Audit-Id: ebd474a1-6fa1-4a56-86e0-9b0104589748
	I0108 21:28:55.228709  240774 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"def710b2-ad1a-496c-8896-306b3bb5308c","resourceVersion":"445","creationTimestamp":"2024-01-08T21:27:52Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 21:28:55.228842  240774 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-379549" context rescaled to 1 replicas
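The rescale above reads the coredns Deployment's Scale subresource and writes it back with spec.replicas set to 1. A rough client-go equivalent, with the same kubeconfig assumption as the previous sketch:

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		deployments := cs.AppsV1().Deployments("kube-system")
		// Read the current Scale, then write it back with one replica,
		// mirroring the GET/rescale pair in the log above.
		scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		scale.Spec.Replicas = 1
		if _, err := deployments.UpdateScale(context.TODO(), "coredns",
			scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}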
	I0108 21:28:55.228878  240774 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 21:28:55.231849  240774 out.go:177] * Verifying Kubernetes components...
	I0108 21:28:55.233409  240774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:28:55.244671  240774 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-150013/kubeconfig
	I0108 21:28:55.244930  240774 kapi.go:59] client config for multinode-379549: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-150013/.minikube/profiles/multinode-379549/client.key", CAFile:"/home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:28:55.245173  240774 node_ready.go:35] waiting up to 6m0s for node "multinode-379549-m02" to be "Ready" ...
	I0108 21:28:55.245244  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549-m02
	I0108 21:28:55.245252  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:55.245260  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:55.245266  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:55.247353  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:55.247373  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:55.247380  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:55 GMT
	I0108 21:28:55.247386  240774 round_trippers.go:580]     Audit-Id: a18833b3-bc5c-4477-bc91-cd2873370fb7
	I0108 21:28:55.247391  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:55.247396  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:55.247402  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:55.247410  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:55.247562  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549-m02","uid":"18149b04-6244-4349-a33c-9a9a2840e7e0","resourceVersion":"485","creationTimestamp":"2024-01-08T21:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_28_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:28:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I0108 21:28:55.746226  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549-m02
	I0108 21:28:55.746248  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:55.746257  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:55.746264  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:55.748538  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:55.748563  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:55.748576  240774 round_trippers.go:580]     Audit-Id: f0cc66fa-725d-4481-b0b1-dcdb2ea2d928
	I0108 21:28:55.748584  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:55.748595  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:55.748605  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:55.748618  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:55.748629  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:55 GMT
	I0108 21:28:55.748729  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549-m02","uid":"18149b04-6244-4349-a33c-9a9a2840e7e0","resourceVersion":"489","creationTimestamp":"2024-01-08T21:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_28_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:28:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0108 21:28:56.245628  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549-m02
	I0108 21:28:56.245649  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:56.245657  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:56.245663  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:56.247816  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:56.247835  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:56.247844  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:56.247852  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:56.247860  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:56.247872  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:56.247883  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:56 GMT
	I0108 21:28:56.247894  240774 round_trippers.go:580]     Audit-Id: 12966822-4627-44d6-ae51-dd1241a65f35
	I0108 21:28:56.248069  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549-m02","uid":"18149b04-6244-4349-a33c-9a9a2840e7e0","resourceVersion":"489","creationTimestamp":"2024-01-08T21:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_28_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:28:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5762 chars]
	I0108 21:28:56.745684  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549-m02
	I0108 21:28:56.745707  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:56.745715  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:56.745721  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:56.748130  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:56.748149  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:56.748157  240774 round_trippers.go:580]     Audit-Id: f4d9eebf-dee8-4149-a79b-6fea03cfc099
	I0108 21:28:56.748162  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:56.748167  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:56.748172  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:56.748186  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:56.748195  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:56 GMT
	I0108 21:28:56.748336  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549-m02","uid":"18149b04-6244-4349-a33c-9a9a2840e7e0","resourceVersion":"504","creationTimestamp":"2024-01-08T21:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_28_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:28:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5848 chars]
	I0108 21:28:56.748688  240774 node_ready.go:49] node "multinode-379549-m02" has status "Ready":"True"
	I0108 21:28:56.748711  240774 node_ready.go:38] duration metric: took 1.503524574s waiting for node "multinode-379549-m02" to be "Ready" ...
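The readiness wait above is plain polling: GET the node roughly every 500ms until its Ready condition reports True, within the 6m0s budget. A minimal client-go version of that loop (same kubeconfig assumption as the earlier sketches; node name taken from the log):

	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// Poll every 500ms for up to 6 minutes, like the waiter in the log.
		err = wait.PollUntilContextTimeout(context.TODO(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-379549-m02", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat errors as transient; keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}
	}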
	I0108 21:28:56.748721  240774 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:28:56.748786  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 21:28:56.748794  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:56.748801  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:56.748807  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:56.751816  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:56.751836  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:56.751845  240774 round_trippers.go:580]     Audit-Id: 40b38b11-6262-41ee-bc7d-cb8601d5f962
	I0108 21:28:56.751853  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:56.751859  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:56.751867  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:56.751883  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:56.751891  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:56 GMT
	I0108 21:28:56.752432  240774 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"504"},"items":[{"metadata":{"name":"coredns-5dd5756b68-72pdc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e1a23fde-a3c8-4acb-b244-41f8ddfe2645","resourceVersion":"441","creationTimestamp":"2024-01-08T21:28:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2ccf13cb-17a5-42f5-93cd-8a7a2f07e11e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:28:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2ccf13cb-17a5-42f5-93cd-8a7a2f07e11e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I0108 21:28:56.754446  240774 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-72pdc" in "kube-system" namespace to be "Ready" ...
	I0108 21:28:56.754534  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-72pdc
	I0108 21:28:56.754543  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:56.754550  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:56.754556  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:56.756226  240774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:28:56.756247  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:56.756253  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:56.756261  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:56.756270  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:56.756280  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:56 GMT
	I0108 21:28:56.756293  240774 round_trippers.go:580]     Audit-Id: 474ff4d7-bf79-415b-a853-3b15aa63abd6
	I0108 21:28:56.756301  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:56.756426  240774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-72pdc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e1a23fde-a3c8-4acb-b244-41f8ddfe2645","resourceVersion":"441","creationTimestamp":"2024-01-08T21:28:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2ccf13cb-17a5-42f5-93cd-8a7a2f07e11e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:28:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2ccf13cb-17a5-42f5-93cd-8a7a2f07e11e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0108 21:28:56.756828  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:56.756840  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:56.756857  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:56.756866  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:56.758406  240774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:28:56.758426  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:56.758436  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:56.758451  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:56.758459  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:56.758468  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:56.758475  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:56 GMT
	I0108 21:28:56.758481  240774 round_trippers.go:580]     Audit-Id: 2adfe668-cebf-439c-b937-16d520d8ad4d
	I0108 21:28:56.758655  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"422","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 21:28:56.758977  240774 pod_ready.go:92] pod "coredns-5dd5756b68-72pdc" in "kube-system" namespace has status "Ready":"True"
	I0108 21:28:56.758998  240774 pod_ready.go:81] duration metric: took 4.528061ms waiting for pod "coredns-5dd5756b68-72pdc" in "kube-system" namespace to be "Ready" ...
	I0108 21:28:56.759008  240774 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-379549" in "kube-system" namespace to be "Ready" ...
	I0108 21:28:56.759074  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-379549
	I0108 21:28:56.759083  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:56.759090  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:56.759100  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:56.760634  240774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:28:56.760652  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:56.760661  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:56.760669  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:56 GMT
	I0108 21:28:56.760677  240774 round_trippers.go:580]     Audit-Id: 175c53ef-3dee-4121-9c28-c20f9a4fc6f2
	I0108 21:28:56.760686  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:56.760695  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:56.760704  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:56.760797  240774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-379549","namespace":"kube-system","uid":"15613f97-4ce1-40e7-9477-83067c6da0d5","resourceVersion":"331","creationTimestamp":"2024-01-08T21:27:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"63b952661100faa87b3a92441ecb5e45","kubernetes.io/config.mirror":"63b952661100faa87b3a92441ecb5e45","kubernetes.io/config.seen":"2024-01-08T21:27:52.971924843Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0108 21:28:56.761160  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:56.761173  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:56.761180  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:56.761188  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:56.762740  240774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:28:56.762756  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:56.762765  240774 round_trippers.go:580]     Audit-Id: 714b86e7-fab2-41b7-bb75-9b9ace4efc0b
	I0108 21:28:56.762774  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:56.762782  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:56.762790  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:56.762801  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:56.762810  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:56 GMT
	I0108 21:28:56.762936  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"422","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 21:28:56.763237  240774 pod_ready.go:92] pod "etcd-multinode-379549" in "kube-system" namespace has status "Ready":"True"
	I0108 21:28:56.763252  240774 pod_ready.go:81] duration metric: took 4.233149ms waiting for pod "etcd-multinode-379549" in "kube-system" namespace to be "Ready" ...
	I0108 21:28:56.763269  240774 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-379549" in "kube-system" namespace to be "Ready" ...
	I0108 21:28:56.763324  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-379549
	I0108 21:28:56.763334  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:56.763343  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:56.763355  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:56.764884  240774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:28:56.764900  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:56.764907  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:56.764914  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:56 GMT
	I0108 21:28:56.764922  240774 round_trippers.go:580]     Audit-Id: 4443b40e-7836-4ebf-a530-b036cae72370
	I0108 21:28:56.764931  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:56.764943  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:56.764955  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:56.765055  240774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-379549","namespace":"kube-system","uid":"904d4735-a5db-4779-a543-37219944e6ad","resourceVersion":"298","creationTimestamp":"2024-01-08T21:27:53Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"69032da1baa3b087de5b0ec3fd7fdd38","kubernetes.io/config.mirror":"69032da1baa3b087de5b0ec3fd7fdd38","kubernetes.io/config.seen":"2024-01-08T21:27:52.971927859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0108 21:28:56.765426  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:56.765438  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:56.765469  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:56.765478  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:56.766991  240774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:28:56.767009  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:56.767019  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:56.767028  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:56 GMT
	I0108 21:28:56.767036  240774 round_trippers.go:580]     Audit-Id: 106eee53-cd09-479e-a7e5-7ce90ffd654f
	I0108 21:28:56.767052  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:56.767060  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:56.767069  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:56.767159  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"422","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 21:28:56.767430  240774 pod_ready.go:92] pod "kube-apiserver-multinode-379549" in "kube-system" namespace has status "Ready":"True"
	I0108 21:28:56.767444  240774 pod_ready.go:81] duration metric: took 4.163778ms waiting for pod "kube-apiserver-multinode-379549" in "kube-system" namespace to be "Ready" ...
	I0108 21:28:56.767451  240774 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-379549" in "kube-system" namespace to be "Ready" ...
	I0108 21:28:56.767489  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-379549
	I0108 21:28:56.767496  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:56.767503  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:56.767508  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:56.769052  240774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:28:56.769066  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:56.769072  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:56.769078  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:56.769083  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:56.769090  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:56.769097  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:56 GMT
	I0108 21:28:56.769103  240774 round_trippers.go:580]     Audit-Id: 796d180b-ca49-45ce-a4e3-7ec7bbabe64b
	I0108 21:28:56.769224  240774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-379549","namespace":"kube-system","uid":"c1f82f54-8b68-4da1-bfd1-f70984dc7718","resourceVersion":"295","creationTimestamp":"2024-01-08T21:27:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8fada1f4a8cbf7faf3b4e40829defd17","kubernetes.io/config.mirror":"8fada1f4a8cbf7faf3b4e40829defd17","kubernetes.io/config.seen":"2024-01-08T21:27:52.971928939Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0108 21:28:56.769574  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:56.769586  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:56.769593  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:56.769601  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:56.770987  240774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:28:56.771005  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:56.771015  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:56.771023  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:56.771031  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:56.771039  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:56 GMT
	I0108 21:28:56.771048  240774 round_trippers.go:580]     Audit-Id: dcd0979c-eda4-45a8-8a04-ff2e60164ec5
	I0108 21:28:56.771061  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:56.771172  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"422","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 21:28:56.771453  240774 pod_ready.go:92] pod "kube-controller-manager-multinode-379549" in "kube-system" namespace has status "Ready":"True"
	I0108 21:28:56.771468  240774 pod_ready.go:81] duration metric: took 4.009712ms waiting for pod "kube-controller-manager-multinode-379549" in "kube-system" namespace to be "Ready" ...
	I0108 21:28:56.771481  240774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xkts4" in "kube-system" namespace to be "Ready" ...
	I0108 21:28:56.945839  240774 request.go:629] Waited for 174.279839ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkts4
	I0108 21:28:56.945900  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkts4
	I0108 21:28:56.945907  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:56.945917  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:56.945930  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:56.948131  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:56.948150  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:56.948157  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:56.948162  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:56.948167  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:56.948173  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:56.948178  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:56 GMT
	I0108 21:28:56.948183  240774 round_trippers.go:580]     Audit-Id: 73b95bd0-048a-4628-bfb5-dc145327fcc1
	I0108 21:28:56.948276  240774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xkts4","generateName":"kube-proxy-","namespace":"kube-system","uid":"ed915324-06b4-489a-896c-bf3c6b6e59cd","resourceVersion":"497","creationTimestamp":"2024-01-08T21:28:54Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2948c2ab-b26e-4614-b6ae-5a133350e7b7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:28:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2948c2ab-b26e-4614-b6ae-5a133350e7b7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0108 21:28:57.146094  240774 request.go:629] Waited for 197.354555ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-379549-m02
	I0108 21:28:57.146186  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549-m02
	I0108 21:28:57.146201  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:57.146220  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:57.146228  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:57.148604  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:57.148653  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:57.148667  240774 round_trippers.go:580]     Audit-Id: 01ae98ff-580d-486c-988e-35b8348d9884
	I0108 21:28:57.148685  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:57.148698  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:57.148705  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:57.148713  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:57.148718  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:57 GMT
	I0108 21:28:57.148834  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549-m02","uid":"18149b04-6244-4349-a33c-9a9a2840e7e0","resourceVersion":"504","creationTimestamp":"2024-01-08T21:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_28_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:28:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5848 chars]
	I0108 21:28:57.149272  240774 pod_ready.go:92] pod "kube-proxy-xkts4" in "kube-system" namespace has status "Ready":"True"
	I0108 21:28:57.149294  240774 pod_ready.go:81] duration metric: took 377.805622ms waiting for pod "kube-proxy-xkts4" in "kube-system" namespace to be "Ready" ...
	I0108 21:28:57.149314  240774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zqbsv" in "kube-system" namespace to be "Ready" ...
	I0108 21:28:57.346101  240774 request.go:629] Waited for 196.709987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zqbsv
	I0108 21:28:57.346195  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zqbsv
	I0108 21:28:57.346207  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:57.346219  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:57.346227  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:57.348546  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:57.348564  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:57.348571  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:57.348580  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:57.348590  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:57.348598  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:57 GMT
	I0108 21:28:57.348607  240774 round_trippers.go:580]     Audit-Id: 0aabb5e8-9c03-4b27-ba8f-141286cb2b53
	I0108 21:28:57.348615  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:57.348736  240774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zqbsv","generateName":"kube-proxy-","namespace":"kube-system","uid":"44731b94-fdd2-41ae-9b2e-44e8eb5ca2a9","resourceVersion":"398","creationTimestamp":"2024-01-08T21:28:06Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2948c2ab-b26e-4614-b6ae-5a133350e7b7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:28:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2948c2ab-b26e-4614-b6ae-5a133350e7b7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0108 21:28:57.546644  240774 request.go:629] Waited for 197.362846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:57.546704  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:57.546709  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:57.546719  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:57.546728  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:57.549014  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:57.549036  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:57.549046  240774 round_trippers.go:580]     Audit-Id: baeef0fa-ff65-4cc7-922e-f28bc6ce8cfe
	I0108 21:28:57.549055  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:57.549064  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:57.549073  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:57.549082  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:57.549091  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:57 GMT
	I0108 21:28:57.549216  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"422","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 21:28:57.549594  240774 pod_ready.go:92] pod "kube-proxy-zqbsv" in "kube-system" namespace has status "Ready":"True"
	I0108 21:28:57.549614  240774 pod_ready.go:81] duration metric: took 400.292014ms waiting for pod "kube-proxy-zqbsv" in "kube-system" namespace to be "Ready" ...
	I0108 21:28:57.549643  240774 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-379549" in "kube-system" namespace to be "Ready" ...
	I0108 21:28:57.746500  240774 request.go:629] Waited for 196.765474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-379549
	I0108 21:28:57.746579  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-379549
	I0108 21:28:57.746585  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:57.746593  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:57.746599  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:57.748917  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:57.748936  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:57.748944  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:57.748950  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:57 GMT
	I0108 21:28:57.748956  240774 round_trippers.go:580]     Audit-Id: 71decf3f-bbbe-45d5-bcd4-2672c70cdd7f
	I0108 21:28:57.748961  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:57.748966  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:57.748972  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:57.749111  240774 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-379549","namespace":"kube-system","uid":"8c5d7d7d-f49a-427d-b2c2-72db08b9934f","resourceVersion":"326","creationTimestamp":"2024-01-08T21:27:52Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"43bec5e36c6ff7f4f3ffbf0511cc280c","kubernetes.io/config.mirror":"43bec5e36c6ff7f4f3ffbf0511cc280c","kubernetes.io/config.seen":"2024-01-08T21:27:47.325897436Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0108 21:28:57.945865  240774 request.go:629] Waited for 196.272919ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:57.945941  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-379549
	I0108 21:28:57.945947  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:57.945955  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:57.945969  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:57.948231  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:57.948252  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:57.948260  240774 round_trippers.go:580]     Audit-Id: db06b72c-0363-45c3-995b-cdf96bb7885e
	I0108 21:28:57.948265  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:57.948276  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:57.948282  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:57.948288  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:57.948293  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:57 GMT
	I0108 21:28:57.948394  240774 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"422","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:27:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 21:28:57.948757  240774 pod_ready.go:92] pod "kube-scheduler-multinode-379549" in "kube-system" namespace has status "Ready":"True"
	I0108 21:28:57.948773  240774 pod_ready.go:81] duration metric: took 399.120002ms waiting for pod "kube-scheduler-multinode-379549" in "kube-system" namespace to be "Ready" ...
	I0108 21:28:57.948783  240774 pod_ready.go:38] duration metric: took 1.200048919s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:28:57.948802  240774 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 21:28:57.948862  240774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:28:57.959622  240774 system_svc.go:56] duration metric: took 10.810163ms WaitForService to wait for kubelet.
	I0108 21:28:57.959650  240774 kubeadm.go:581] duration metric: took 2.730742824s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 21:28:57.959677  240774 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:28:58.146142  240774 request.go:629] Waited for 186.368204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0108 21:28:58.146200  240774 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0108 21:28:58.146204  240774 round_trippers.go:469] Request Headers:
	I0108 21:28:58.146212  240774 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:28:58.146221  240774 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:28:58.148503  240774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:28:58.148527  240774 round_trippers.go:577] Response Headers:
	I0108 21:28:58.148538  240774 round_trippers.go:580]     Audit-Id: 2bbb0cbd-43c1-4364-8bd9-bd65d8438fcd
	I0108 21:28:58.148547  240774 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:28:58.148555  240774 round_trippers.go:580]     Content-Type: application/json
	I0108 21:28:58.148563  240774 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8ca01e75-5a12-46df-8ec5-3b982ff6f130
	I0108 21:28:58.148573  240774 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8a8beea-a6e3-4c3a-be4a-220cda3acc0d
	I0108 21:28:58.148585  240774 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:28:58 GMT
	I0108 21:28:58.148855  240774 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"504"},"items":[{"metadata":{"name":"multinode-379549","uid":"7567b833-89ee-4e73-888a-9952f5e20e72","resourceVersion":"422","creationTimestamp":"2024-01-08T21:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-379549","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-379549","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_27_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12840 chars]
	I0108 21:28:58.149593  240774 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 21:28:58.149617  240774 node_conditions.go:123] node cpu capacity is 8
	I0108 21:28:58.149639  240774 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 21:28:58.149649  240774 node_conditions.go:123] node cpu capacity is 8
	I0108 21:28:58.149658  240774 node_conditions.go:105] duration metric: took 189.974081ms to run NodePressure ...
	I0108 21:28:58.149673  240774 start.go:228] waiting for startup goroutines ...
	I0108 21:28:58.149706  240774 start.go:242] writing updated cluster config ...
	I0108 21:28:58.150052  240774 ssh_runner.go:195] Run: rm -f paused
	I0108 21:28:58.197919  240774 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 21:28:58.201303  240774 out.go:177] * Done! kubectl is now configured to use "multinode-379549" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 08 21:28:38 multinode-379549 crio[959]: time="2024-01-08 21:28:38.076330207Z" level=info msg="Starting container: b2b5c2c688833c2487e316e0220c1c1ecc21cd5f3c8d04a045e365cc23ac9dc7" id=c250b33d-1a19-4c2e-85ce-67586c05eb1c name=/runtime.v1.RuntimeService/StartContainer
	Jan 08 21:28:38 multinode-379549 crio[959]: time="2024-01-08 21:28:38.082558533Z" level=info msg="Started container" PID=2320 containerID=b2b5c2c688833c2487e316e0220c1c1ecc21cd5f3c8d04a045e365cc23ac9dc7 description=kube-system/storage-provisioner/storage-provisioner id=c250b33d-1a19-4c2e-85ce-67586c05eb1c name=/runtime.v1.RuntimeService/StartContainer sandboxID=be3454160c76e0e13b7b6527f36ef4b7b4e5c3138ae42534a286d491a0912363
	Jan 08 21:28:38 multinode-379549 crio[959]: time="2024-01-08 21:28:38.086233763Z" level=info msg="Created container 0268ce11be94beef1db2731cff4147c8ae3456be14eb7188d10e0283f0ad59d8: kube-system/coredns-5dd5756b68-72pdc/coredns" id=8ac5540d-8ee1-44fc-a6c6-e6b350d30b65 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 08 21:28:38 multinode-379549 crio[959]: time="2024-01-08 21:28:38.086801253Z" level=info msg="Starting container: 0268ce11be94beef1db2731cff4147c8ae3456be14eb7188d10e0283f0ad59d8" id=5808dea8-9fc2-445f-a031-6480f9e14363 name=/runtime.v1.RuntimeService/StartContainer
	Jan 08 21:28:38 multinode-379549 crio[959]: time="2024-01-08 21:28:38.115176109Z" level=info msg="Started container" PID=2338 containerID=0268ce11be94beef1db2731cff4147c8ae3456be14eb7188d10e0283f0ad59d8 description=kube-system/coredns-5dd5756b68-72pdc/coredns id=5808dea8-9fc2-445f-a031-6480f9e14363 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fdf2f208c2f409e6c1aff04f2b75973b20eb52bcbd893f770f81f1e2587780b3
	Jan 08 21:28:59 multinode-379549 crio[959]: time="2024-01-08 21:28:59.211458462Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-hncds/POD" id=2afdd562-1e8f-4589-b945-061c83b859eb name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 08 21:28:59 multinode-379549 crio[959]: time="2024-01-08 21:28:59.211530296Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 08 21:28:59 multinode-379549 crio[959]: time="2024-01-08 21:28:59.226008588Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-hncds Namespace:default ID:1cf8a5c4be186118dd21cb1e7eea3beefff5b61b0cb6abe29be2d2b9fc01e49c UID:1d91bdc2-729e-4815-871a-6371c80144d4 NetNS:/var/run/netns/677b277d-4b1d-443a-a8cf-8fc8a926f427 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 08 21:28:59 multinode-379549 crio[959]: time="2024-01-08 21:28:59.226047706Z" level=info msg="Adding pod default_busybox-5bc68d56bd-hncds to CNI network \"kindnet\" (type=ptp)"
	Jan 08 21:28:59 multinode-379549 crio[959]: time="2024-01-08 21:28:59.234709523Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-hncds Namespace:default ID:1cf8a5c4be186118dd21cb1e7eea3beefff5b61b0cb6abe29be2d2b9fc01e49c UID:1d91bdc2-729e-4815-871a-6371c80144d4 NetNS:/var/run/netns/677b277d-4b1d-443a-a8cf-8fc8a926f427 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 08 21:28:59 multinode-379549 crio[959]: time="2024-01-08 21:28:59.234840043Z" level=info msg="Checking pod default_busybox-5bc68d56bd-hncds for CNI network kindnet (type=ptp)"
	Jan 08 21:28:59 multinode-379549 crio[959]: time="2024-01-08 21:28:59.260817134Z" level=info msg="Ran pod sandbox 1cf8a5c4be186118dd21cb1e7eea3beefff5b61b0cb6abe29be2d2b9fc01e49c with infra container: default/busybox-5bc68d56bd-hncds/POD" id=2afdd562-1e8f-4589-b945-061c83b859eb name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 08 21:28:59 multinode-379549 crio[959]: time="2024-01-08 21:28:59.261902945Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=b65c7cdd-11bc-4a9c-82f4-8c2d25d1dd49 name=/runtime.v1.ImageService/ImageStatus
	Jan 08 21:28:59 multinode-379549 crio[959]: time="2024-01-08 21:28:59.262114950Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=b65c7cdd-11bc-4a9c-82f4-8c2d25d1dd49 name=/runtime.v1.ImageService/ImageStatus
	Jan 08 21:28:59 multinode-379549 crio[959]: time="2024-01-08 21:28:59.262920588Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=9bba4458-d5d0-4750-9f80-edf40422c265 name=/runtime.v1.ImageService/PullImage
	Jan 08 21:28:59 multinode-379549 crio[959]: time="2024-01-08 21:28:59.266406878Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 08 21:28:59 multinode-379549 crio[959]: time="2024-01-08 21:28:59.408033982Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 08 21:28:59 multinode-379549 crio[959]: time="2024-01-08 21:28:59.788780430Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=9bba4458-d5d0-4750-9f80-edf40422c265 name=/runtime.v1.ImageService/PullImage
	Jan 08 21:28:59 multinode-379549 crio[959]: time="2024-01-08 21:28:59.789813378Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=32aaee04-f2a9-4519-a646-a9e52cba7fc2 name=/runtime.v1.ImageService/ImageStatus
	Jan 08 21:28:59 multinode-379549 crio[959]: time="2024-01-08 21:28:59.791197741Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=32aaee04-f2a9-4519-a646-a9e52cba7fc2 name=/runtime.v1.ImageService/ImageStatus
	Jan 08 21:28:59 multinode-379549 crio[959]: time="2024-01-08 21:28:59.792068721Z" level=info msg="Creating container: default/busybox-5bc68d56bd-hncds/busybox" id=50bb9a05-8931-4a43-8b09-d4603eef66bd name=/runtime.v1.RuntimeService/CreateContainer
	Jan 08 21:28:59 multinode-379549 crio[959]: time="2024-01-08 21:28:59.792184292Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 08 21:28:59 multinode-379549 crio[959]: time="2024-01-08 21:28:59.862314277Z" level=info msg="Created container 225b38eecb34cf0e6116822fab101d2ce4ff57f564921383c38d5d840877c18f: default/busybox-5bc68d56bd-hncds/busybox" id=50bb9a05-8931-4a43-8b09-d4603eef66bd name=/runtime.v1.RuntimeService/CreateContainer
	Jan 08 21:28:59 multinode-379549 crio[959]: time="2024-01-08 21:28:59.862931466Z" level=info msg="Starting container: 225b38eecb34cf0e6116822fab101d2ce4ff57f564921383c38d5d840877c18f" id=7792218c-6e7d-4423-b899-754a65f23ce7 name=/runtime.v1.RuntimeService/StartContainer
	Jan 08 21:28:59 multinode-379549 crio[959]: time="2024-01-08 21:28:59.870192975Z" level=info msg="Started container" PID=2513 containerID=225b38eecb34cf0e6116822fab101d2ce4ff57f564921383c38d5d840877c18f description=default/busybox-5bc68d56bd-hncds/busybox id=7792218c-6e7d-4423-b899-754a65f23ce7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1cf8a5c4be186118dd21cb1e7eea3beefff5b61b0cb6abe29be2d2b9fc01e49c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	225b38eecb34c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 seconds ago        Running             busybox                   0                   1cf8a5c4be186       busybox-5bc68d56bd-hncds
	0268ce11be94b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      25 seconds ago       Running             coredns                   0                   fdf2f208c2f40       coredns-5dd5756b68-72pdc
	b2b5c2c688833       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      25 seconds ago       Running             storage-provisioner       0                   be3454160c76e       storage-provisioner
	a64795f43d293       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      57 seconds ago       Running             kube-proxy                0                   d7d2c4b605881       kube-proxy-zqbsv
	393754d450559       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      57 seconds ago       Running             kindnet-cni               0                   a9088b0c69f9d       kindnet-982tk
	c0e5cf479b049       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   9ddf65470c314       kube-apiserver-multinode-379549
	216df8b39828d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   85612a7b0e924       etcd-multinode-379549
	664f9790a5bd4       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   730a12a2c6663       kube-controller-manager-multinode-379549
	7e968ac989f81       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   0addaedb50ec5       kube-scheduler-multinode-379549
	
	
	==> coredns [0268ce11be94beef1db2731cff4147c8ae3456be14eb7188d10e0283f0ad59d8] <==
	[INFO] 10.244.1.2:47143 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069514s
	[INFO] 10.244.0.3:49431 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122031s
	[INFO] 10.244.0.3:45919 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001418522s
	[INFO] 10.244.0.3:57894 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000080131s
	[INFO] 10.244.0.3:51713 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000085211s
	[INFO] 10.244.0.3:59311 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001018005s
	[INFO] 10.244.0.3:47045 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000053174s
	[INFO] 10.244.0.3:40865 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075088s
	[INFO] 10.244.0.3:46746 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004754s
	[INFO] 10.244.1.2:47250 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120661s
	[INFO] 10.244.1.2:58356 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009237s
	[INFO] 10.244.1.2:39345 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000048814s
	[INFO] 10.244.1.2:58458 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057569s
	[INFO] 10.244.0.3:52543 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148677s
	[INFO] 10.244.0.3:39844 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075829s
	[INFO] 10.244.0.3:41584 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063655s
	[INFO] 10.244.0.3:48940 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054727s
	[INFO] 10.244.1.2:58626 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123895s
	[INFO] 10.244.1.2:51982 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00010539s
	[INFO] 10.244.1.2:60568 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000080435s
	[INFO] 10.244.1.2:36689 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000081046s
	[INFO] 10.244.0.3:46425 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099786s
	[INFO] 10.244.0.3:41596 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000068399s
	[INFO] 10.244.0.3:46217 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000073207s
	[INFO] 10.244.0.3:44097 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000059864s
	
	
	==> describe nodes <==
	Name:               multinode-379549
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-379549
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=multinode-379549
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T21_27_53_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:27:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-379549
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 21:28:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:28:37 +0000   Mon, 08 Jan 2024 21:27:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:28:37 +0000   Mon, 08 Jan 2024 21:27:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:28:37 +0000   Mon, 08 Jan 2024 21:27:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:28:37 +0000   Mon, 08 Jan 2024 21:28:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-379549
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	System Info:
	  Machine ID:                 7a3d0f1af93e4664b9715a651337ad9e
	  System UUID:                c9935c79-119b-44c7-8e9a-4ea6292fedb2
	  Boot ID:                    b9c55cc6-3d64-43dc-b6f4-c38d0ea8cf14
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-hncds                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 coredns-5dd5756b68-72pdc                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     57s
	  kube-system                 etcd-multinode-379549                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         70s
	  kube-system                 kindnet-982tk                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-multinode-379549             250m (3%)     0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-multinode-379549    200m (2%)     0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-proxy-zqbsv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-multinode-379549             100m (1%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 56s                kube-proxy       
	  Normal  Starting                 76s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  76s (x8 over 76s)  kubelet          Node multinode-379549 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s (x8 over 76s)  kubelet          Node multinode-379549 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s (x8 over 76s)  kubelet          Node multinode-379549 status is now: NodeHasSufficientPID
	  Normal  Starting                 71s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  70s                kubelet          Node multinode-379549 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s                kubelet          Node multinode-379549 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s                kubelet          Node multinode-379549 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           58s                node-controller  Node multinode-379549 event: Registered Node multinode-379549 in Controller
	  Normal  NodeReady                26s                kubelet          Node multinode-379549 status is now: NodeReady
	
	
	Name:               multinode-379549-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-379549-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=multinode-379549
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_08T21_28_54_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:28:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-379549-m02" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:28:56 +0000   Mon, 08 Jan 2024 21:28:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:28:56 +0000   Mon, 08 Jan 2024 21:28:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:28:56 +0000   Mon, 08 Jan 2024 21:28:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:28:56 +0000   Mon, 08 Jan 2024 21:28:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-379549-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac61d11dae6042d8b774b22907084e78
	  System UUID:                4cada4f8-4a95-4dbd-992b-c8fb9bb9047c
	  Boot ID:                    b9c55cc6-3d64-43dc-b6f4-c38d0ea8cf14
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-dmq2z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kindnet-6g48k               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9s
	  kube-system                 kube-proxy-xkts4            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age               From             Message
	  ----    ------                   ----              ----             -------
	  Normal  Starting                 8s                kube-proxy       
	  Normal  NodeHasSufficientMemory  9s (x5 over 10s)  kubelet          Node multinode-379549-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x5 over 10s)  kubelet          Node multinode-379549-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x5 over 10s)  kubelet          Node multinode-379549-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8s                node-controller  Node multinode-379549-m02 event: Registered Node multinode-379549-m02 in Controller
	  Normal  NodeReady                7s                kubelet          Node multinode-379549-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.004912] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006564] FS-Cache: N-cookie d=00000000c3b3813c{9p.inode} n=0000000013361d1e
	[  +0.007373] FS-Cache: N-key=[8] 'eaa40f0200000000'
	[  +0.298020] FS-Cache: Duplicate cookie detected
	[  +0.004718] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006750] FS-Cache: O-cookie d=00000000c3b3813c{9p.inode} n=00000000cef9bdc0
	[  +0.007344] FS-Cache: O-key=[8] 'f6a40f0200000000'
	[  +0.004918] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006650] FS-Cache: N-cookie d=00000000c3b3813c{9p.inode} n=000000000a10e56e
	[  +0.008810] FS-Cache: N-key=[8] 'f6a40f0200000000'
	[ +16.807296] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jan 8 21:19] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c6 b1 d2 c8 ab 41 b6 f8 b3 70 ff ae 08 00
	[  +1.020088] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c6 b1 d2 c8 ab 41 b6 f8 b3 70 ff ae 08 00
	[Jan 8 21:20] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: c6 b1 d2 c8 ab 41 b6 f8 b3 70 ff ae 08 00
	[  +4.191572] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c6 b1 d2 c8 ab 41 b6 f8 b3 70 ff ae 08 00
	[  +8.195208] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c6 b1 d2 c8 ab 41 b6 f8 b3 70 ff ae 08 00
	[ +16.122464] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: c6 b1 d2 c8 ab 41 b6 f8 b3 70 ff ae 08 00
	[Jan 8 21:21] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c6 b1 d2 c8 ab 41 b6 f8 b3 70 ff ae 08 00
	
	
	==> etcd [216df8b39828db925203ab7ab062d4a0e0144f6378ba87bbc4f6f7b813acb4d2] <==
	{"level":"info","ts":"2024-01-08T21:27:48.05061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2024-01-08T21:27:48.11438Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2024-01-08T21:27:48.11616Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-08T21:27:48.116737Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-08T21:27:48.116787Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-08T21:27:48.116302Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2024-01-08T21:27:48.116825Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2024-01-08T21:27:48.342121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-08T21:27:48.34226Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-08T21:27:48.342314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2024-01-08T21:27:48.342339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2024-01-08T21:27:48.342348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-01-08T21:27:48.342358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2024-01-08T21:27:48.342368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-01-08T21:27:48.344588Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:27:48.345288Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-379549 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T21:27:48.345321Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:27:48.345434Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:27:48.345584Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T21:27:48.345666Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T21:27:48.345628Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:27:48.345909Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:27:48.345968Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:27:48.346787Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T21:27:48.347719Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	
	
	==> kernel <==
	 21:29:03 up  4:11,  0 users,  load average: 1.00, 1.12, 1.41
	Linux multinode-379549 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [393754d45055981a7d17c99e68705d936750803552b191182e4d96b7ecd7cdc5] <==
	I0108 21:28:07.014057       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0108 21:28:07.014169       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0108 21:28:07.014380       1 main.go:116] setting mtu 1500 for CNI 
	I0108 21:28:07.014408       1 main.go:146] kindnetd IP family: "ipv4"
	I0108 21:28:07.014432       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0108 21:28:37.233689       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0108 21:28:37.241178       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0108 21:28:37.241202       1 main.go:227] handling current node
	I0108 21:28:47.252658       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0108 21:28:47.252683       1 main.go:227] handling current node
	I0108 21:28:57.265535       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0108 21:28:57.265558       1 main.go:227] handling current node
	I0108 21:28:57.265567       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0108 21:28:57.265572       1 main.go:250] Node multinode-379549-m02 has CIDR [10.244.1.0/24] 
	I0108 21:28:57.265728       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	
	
	==> kube-apiserver [c0e5cf479b049cc3a91b75f42ad046d99966a8946acfcb7b4ae874dbde1610c6] <==
	I0108 21:27:50.127528       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 21:27:50.128412       1 shared_informer.go:318] Caches are synced for configmaps
	I0108 21:27:50.129079       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0108 21:27:50.129166       1 aggregator.go:166] initial CRD sync complete...
	I0108 21:27:50.129202       1 autoregister_controller.go:141] Starting autoregister controller
	I0108 21:27:50.129233       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0108 21:27:50.129268       1 cache.go:39] Caches are synced for autoregister controller
	I0108 21:27:50.130068       1 controller.go:624] quota admission added evaluator for: namespaces
	I0108 21:27:50.213603       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0108 21:27:50.223184       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 21:27:51.030973       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0108 21:27:51.035464       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0108 21:27:51.035478       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 21:27:51.395908       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 21:27:51.427731       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 21:27:51.540510       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0108 21:27:51.547747       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0108 21:27:51.548672       1 controller.go:624] quota admission added evaluator for: endpoints
	I0108 21:27:51.553220       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 21:27:52.127876       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0108 21:27:52.920895       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0108 21:27:52.931636       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0108 21:27:52.941477       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0108 21:28:06.080440       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0108 21:28:06.133896       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [664f9790a5bd43e1de55101aed5b57773329a6cc6d2c1e2c029ad5fc8b026f4b] <==
	I0108 21:28:37.711635       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.584µs"
	I0108 21:28:39.131487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="204.998µs"
	I0108 21:28:39.153206       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.963371ms"
	I0108 21:28:39.153308       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.483µs"
	I0108 21:28:40.334670       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0108 21:28:54.668951       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-379549-m02\" does not exist"
	I0108 21:28:54.674954       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-379549-m02" podCIDRs=["10.244.1.0/24"]
	I0108 21:28:54.679203       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-6g48k"
	I0108 21:28:54.679298       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xkts4"
	I0108 21:28:55.335462       1 event.go:307] "Event occurred" object="multinode-379549-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-379549-m02 event: Registered Node multinode-379549-m02 in Controller"
	I0108 21:28:55.335607       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-379549-m02"
	I0108 21:28:56.420775       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-379549-m02"
	I0108 21:28:58.893345       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0108 21:28:58.899947       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-dmq2z"
	I0108 21:28:58.904282       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-hncds"
	I0108 21:28:58.909995       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="16.943573ms"
	I0108 21:28:58.914744       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.69922ms"
	I0108 21:28:58.914838       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="49.563µs"
	I0108 21:28:58.920809       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="70.975µs"
	I0108 21:28:58.921239       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.881µs"
	I0108 21:29:00.176743       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.752866ms"
	I0108 21:29:00.176831       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="50.958µs"
	I0108 21:29:00.204269       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.443441ms"
	I0108 21:29:00.204379       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="64.147µs"
	I0108 21:29:00.345393       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-dmq2z" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-dmq2z"
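
The controller-manager entries show multinode-379549-m02 registering and being assigned PodCIDR 10.244.1.0/24, which matches the route kindnet installed earlier. A one-line check of that assignment (hypothetical, using the names from these logs):

	kubectl --context multinode-379549 get node multinode-379549-m02 -o jsonpath='{.spec.podCIDR}'
	# expected output: 10.244.1.0/24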
	
	
	==> kube-proxy [a64795f43d2937c34766aee972f40b04efcfd03389318e692d1ebb4412472e66] <==
	I0108 21:28:07.024636       1 server_others.go:69] "Using iptables proxy"
	I0108 21:28:07.033173       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0108 21:28:07.129913       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0108 21:28:07.131897       1 server_others.go:152] "Using iptables Proxier"
	I0108 21:28:07.131928       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0108 21:28:07.131939       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0108 21:28:07.131981       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 21:28:07.132181       1 server.go:846] "Version info" version="v1.28.4"
	I0108 21:28:07.132192       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:28:07.132953       1 config.go:97] "Starting endpoint slice config controller"
	I0108 21:28:07.132971       1 config.go:315] "Starting node config controller"
	I0108 21:28:07.132984       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 21:28:07.132982       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 21:28:07.133013       1 config.go:188] "Starting service config controller"
	I0108 21:28:07.133018       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 21:28:07.233156       1 shared_informer.go:318] Caches are synced for service config
	I0108 21:28:07.233175       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 21:28:07.233160       1 shared_informer.go:318] Caches are synced for node config
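
kube-proxy logs that it set route_localnet=1 so NodePorts stay reachable on loopback. One way to confirm the sysctl actually took effect inside the node (profile name assumed from the logs above):

	minikube -p multinode-379549 ssh -- sysctl net.ipv4.conf.all.route_localnet
	# expected: net.ipv4.conf.all.route_localnet = 1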
	
	
	==> kube-scheduler [7e968ac989f818562b565d5bb90059b852c922631910c6e24756f890469abbdd] <==
	W0108 21:27:50.216783       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 21:27:50.217125       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 21:27:50.217436       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 21:27:50.217494       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 21:27:50.217511       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 21:27:50.217537       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 21:27:50.217564       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:27:50.217584       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 21:27:50.217646       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:27:50.217661       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 21:27:50.218817       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:27:50.218845       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 21:27:51.022895       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:27:51.022928       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 21:27:51.109748       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:27:51.109779       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 21:27:51.143529       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 21:27:51.143560       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 21:27:51.152024       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 21:27:51.152059       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 21:27:51.166135       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 21:27:51.166160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0108 21:27:51.304716       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 21:27:51.304755       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0108 21:27:53.934993       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
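
The scheduler's "forbidden" warnings are the usual startup race: its informers begin listing resources before the system:kube-scheduler RBAC bindings exist, and the errors stop once the caches sync (the final line above). A hypothetical after-the-fact check of those permissions:

	kubectl --context multinode-379549 auth can-i list pods --as=system:kube-scheduler
	# should print: yes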
	
	
	==> kubelet <==
	Jan 08 21:28:06 multinode-379549 kubelet[1586]: I0108 21:28:06.213990    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s24rb\" (UniqueName: \"kubernetes.io/projected/382d7096-e18b-45fc-a98a-df05c243ffeb-kube-api-access-s24rb\") pod \"kindnet-982tk\" (UID: \"382d7096-e18b-45fc-a98a-df05c243ffeb\") " pod="kube-system/kindnet-982tk"
	Jan 08 21:28:06 multinode-379549 kubelet[1586]: I0108 21:28:06.214026    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44731b94-fdd2-41ae-9b2e-44e8eb5ca2a9-xtables-lock\") pod \"kube-proxy-zqbsv\" (UID: \"44731b94-fdd2-41ae-9b2e-44e8eb5ca2a9\") " pod="kube-system/kube-proxy-zqbsv"
	Jan 08 21:28:06 multinode-379549 kubelet[1586]: I0108 21:28:06.214060    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/382d7096-e18b-45fc-a98a-df05c243ffeb-xtables-lock\") pod \"kindnet-982tk\" (UID: \"382d7096-e18b-45fc-a98a-df05c243ffeb\") " pod="kube-system/kindnet-982tk"
	Jan 08 21:28:06 multinode-379549 kubelet[1586]: I0108 21:28:06.214087    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44731b94-fdd2-41ae-9b2e-44e8eb5ca2a9-lib-modules\") pod \"kube-proxy-zqbsv\" (UID: \"44731b94-fdd2-41ae-9b2e-44e8eb5ca2a9\") " pod="kube-system/kube-proxy-zqbsv"
	Jan 08 21:28:06 multinode-379549 kubelet[1586]: I0108 21:28:06.214114    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/382d7096-e18b-45fc-a98a-df05c243ffeb-lib-modules\") pod \"kindnet-982tk\" (UID: \"382d7096-e18b-45fc-a98a-df05c243ffeb\") " pod="kube-system/kindnet-982tk"
	Jan 08 21:28:06 multinode-379549 kubelet[1586]: I0108 21:28:06.214137    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/44731b94-fdd2-41ae-9b2e-44e8eb5ca2a9-kube-proxy\") pod \"kube-proxy-zqbsv\" (UID: \"44731b94-fdd2-41ae-9b2e-44e8eb5ca2a9\") " pod="kube-system/kube-proxy-zqbsv"
	Jan 08 21:28:06 multinode-379549 kubelet[1586]: I0108 21:28:06.214166    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv45r\" (UniqueName: \"kubernetes.io/projected/44731b94-fdd2-41ae-9b2e-44e8eb5ca2a9-kube-api-access-xv45r\") pod \"kube-proxy-zqbsv\" (UID: \"44731b94-fdd2-41ae-9b2e-44e8eb5ca2a9\") " pod="kube-system/kube-proxy-zqbsv"
	Jan 08 21:28:06 multinode-379549 kubelet[1586]: W0108 21:28:06.614631    1586 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/6363bf6a0fa165f3dc81661834e1aa6385238760cfcba75c8c1a781a69e042ac/crio-d7d2c4b60588133b01d3d7276cb882096325bf2656876a9dab63af9e917d5e21 WatchSource:0}: Error finding container d7d2c4b60588133b01d3d7276cb882096325bf2656876a9dab63af9e917d5e21: Status 404 returned error can't find the container with id d7d2c4b60588133b01d3d7276cb882096325bf2656876a9dab63af9e917d5e21
	Jan 08 21:28:07 multinode-379549 kubelet[1586]: I0108 21:28:07.069749    1586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zqbsv" podStartSLOduration=1.06970201 podCreationTimestamp="2024-01-08 21:28:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 21:28:07.069462769 +0000 UTC m=+14.173363757" watchObservedRunningTime="2024-01-08 21:28:07.06970201 +0000 UTC m=+14.173603005"
	Jan 08 21:28:07 multinode-379549 kubelet[1586]: I0108 21:28:07.114113    1586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-982tk" podStartSLOduration=1.114062038 podCreationTimestamp="2024-01-08 21:28:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 21:28:07.11395692 +0000 UTC m=+14.217857932" watchObservedRunningTime="2024-01-08 21:28:07.114062038 +0000 UTC m=+14.217963031"
	Jan 08 21:28:37 multinode-379549 kubelet[1586]: I0108 21:28:37.672813    1586 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 08 21:28:37 multinode-379549 kubelet[1586]: I0108 21:28:37.694978    1586 topology_manager.go:215] "Topology Admit Handler" podUID="c2b077b4-019f-4d60-950e-5f924b4cacb4" podNamespace="kube-system" podName="storage-provisioner"
	Jan 08 21:28:37 multinode-379549 kubelet[1586]: I0108 21:28:37.696734    1586 topology_manager.go:215] "Topology Admit Handler" podUID="e1a23fde-a3c8-4acb-b244-41f8ddfe2645" podNamespace="kube-system" podName="coredns-5dd5756b68-72pdc"
	Jan 08 21:28:37 multinode-379549 kubelet[1586]: I0108 21:28:37.830381    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kss9s\" (UniqueName: \"kubernetes.io/projected/e1a23fde-a3c8-4acb-b244-41f8ddfe2645-kube-api-access-kss9s\") pod \"coredns-5dd5756b68-72pdc\" (UID: \"e1a23fde-a3c8-4acb-b244-41f8ddfe2645\") " pod="kube-system/coredns-5dd5756b68-72pdc"
	Jan 08 21:28:37 multinode-379549 kubelet[1586]: I0108 21:28:37.830428    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c2b077b4-019f-4d60-950e-5f924b4cacb4-tmp\") pod \"storage-provisioner\" (UID: \"c2b077b4-019f-4d60-950e-5f924b4cacb4\") " pod="kube-system/storage-provisioner"
	Jan 08 21:28:37 multinode-379549 kubelet[1586]: I0108 21:28:37.830452    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1a23fde-a3c8-4acb-b244-41f8ddfe2645-config-volume\") pod \"coredns-5dd5756b68-72pdc\" (UID: \"e1a23fde-a3c8-4acb-b244-41f8ddfe2645\") " pod="kube-system/coredns-5dd5756b68-72pdc"
	Jan 08 21:28:37 multinode-379549 kubelet[1586]: I0108 21:28:37.830471    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-przck\" (UniqueName: \"kubernetes.io/projected/c2b077b4-019f-4d60-950e-5f924b4cacb4-kube-api-access-przck\") pod \"storage-provisioner\" (UID: \"c2b077b4-019f-4d60-950e-5f924b4cacb4\") " pod="kube-system/storage-provisioner"
	Jan 08 21:28:38 multinode-379549 kubelet[1586]: W0108 21:28:38.026333    1586 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/6363bf6a0fa165f3dc81661834e1aa6385238760cfcba75c8c1a781a69e042ac/crio-be3454160c76e0e13b7b6527f36ef4b7b4e5c3138ae42534a286d491a0912363 WatchSource:0}: Error finding container be3454160c76e0e13b7b6527f36ef4b7b4e5c3138ae42534a286d491a0912363: Status 404 returned error can't find the container with id be3454160c76e0e13b7b6527f36ef4b7b4e5c3138ae42534a286d491a0912363
	Jan 08 21:28:38 multinode-379549 kubelet[1586]: W0108 21:28:38.030066    1586 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/6363bf6a0fa165f3dc81661834e1aa6385238760cfcba75c8c1a781a69e042ac/crio-fdf2f208c2f409e6c1aff04f2b75973b20eb52bcbd893f770f81f1e2587780b3 WatchSource:0}: Error finding container fdf2f208c2f409e6c1aff04f2b75973b20eb52bcbd893f770f81f1e2587780b3: Status 404 returned error can't find the container with id fdf2f208c2f409e6c1aff04f2b75973b20eb52bcbd893f770f81f1e2587780b3
	Jan 08 21:28:39 multinode-379549 kubelet[1586]: I0108 21:28:39.131326    1586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=32.131277805 podCreationTimestamp="2024-01-08 21:28:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 21:28:38.13189026 +0000 UTC m=+45.235791251" watchObservedRunningTime="2024-01-08 21:28:39.131277805 +0000 UTC m=+46.235178829"
	Jan 08 21:28:39 multinode-379549 kubelet[1586]: I0108 21:28:39.131431    1586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-72pdc" podStartSLOduration=33.131401481 podCreationTimestamp="2024-01-08 21:28:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 21:28:39.131185968 +0000 UTC m=+46.235086976" watchObservedRunningTime="2024-01-08 21:28:39.131401481 +0000 UTC m=+46.235302486"
	Jan 08 21:28:58 multinode-379549 kubelet[1586]: I0108 21:28:58.910145    1586 topology_manager.go:215] "Topology Admit Handler" podUID="1d91bdc2-729e-4815-871a-6371c80144d4" podNamespace="default" podName="busybox-5bc68d56bd-hncds"
	Jan 08 21:28:59 multinode-379549 kubelet[1586]: I0108 21:28:59.032954    1586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnz8s\" (UniqueName: \"kubernetes.io/projected/1d91bdc2-729e-4815-871a-6371c80144d4-kube-api-access-jnz8s\") pod \"busybox-5bc68d56bd-hncds\" (UID: \"1d91bdc2-729e-4815-871a-6371c80144d4\") " pod="default/busybox-5bc68d56bd-hncds"
	Jan 08 21:28:59 multinode-379549 kubelet[1586]: W0108 21:28:59.258434    1586 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/6363bf6a0fa165f3dc81661834e1aa6385238760cfcba75c8c1a781a69e042ac/crio-1cf8a5c4be186118dd21cb1e7eea3beefff5b61b0cb6abe29be2d2b9fc01e49c WatchSource:0}: Error finding container 1cf8a5c4be186118dd21cb1e7eea3beefff5b61b0cb6abe29be2d2b9fc01e49c: Status 404 returned error can't find the container with id 1cf8a5c4be186118dd21cb1e7eea3beefff5b61b0cb6abe29be2d2b9fc01e49c
	Jan 08 21:29:00 multinode-379549 kubelet[1586]: I0108 21:29:00.171373    1586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-hncds" podStartSLOduration=1.6443157940000002 podCreationTimestamp="2024-01-08 21:28:58 +0000 UTC" firstStartedPulling="2024-01-08 21:28:59.262281459 +0000 UTC m=+66.366182442" lastFinishedPulling="2024-01-08 21:28:59.789285855 +0000 UTC m=+66.893186837" observedRunningTime="2024-01-08 21:29:00.171060625 +0000 UTC m=+67.274961613" watchObservedRunningTime="2024-01-08 21:29:00.171320189 +0000 UTC m=+67.275221179"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-379549 -n multinode-379549
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-379549 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.10s)
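
For context, this test roughly amounts to exec'ing into each busybox pod and pinging the host. A rough manual equivalent, assuming a pod name from the controller-manager events above and that the docker network gateway is 192.168.58.1:

	kubectl --context multinode-379549 exec busybox-5bc68d56bd-hncds -- nslookup host.minikube.internal
	kubectl --context multinode-379549 exec busybox-5bc68d56bd-hncds -- ping -c 1 192.168.58.1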

                                                
                                    
x
+
TestRunningBinaryUpgrade (70.92s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.9.0.2277863355.exe start -p running-upgrade-691168 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.9.0.2277863355.exe start -p running-upgrade-691168 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m5.744160517s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-691168 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-691168 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (2.223359326s)

                                                
                                                
-- stdout --
	* [running-upgrade-691168] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-150013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-150013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-691168 in cluster running-upgrade-691168
	* Pulling base image v0.0.42-1703790982-17866 ...
	* Updating the running docker "running-upgrade-691168" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 21:40:35.955813  323391 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:40:35.956032  323391 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:40:35.956039  323391 out.go:309] Setting ErrFile to fd 2...
	I0108 21:40:35.956047  323391 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:40:35.956379  323391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-150013/.minikube/bin
	I0108 21:40:35.957131  323391 out.go:303] Setting JSON to false
	I0108 21:40:35.959061  323391 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":15788,"bootTime":1704734248,"procs":607,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:40:35.959150  323391 start.go:138] virtualization: kvm guest
	I0108 21:40:35.961783  323391 out.go:177] * [running-upgrade-691168] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:40:35.963887  323391 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 21:40:35.963883  323391 notify.go:220] Checking for updates...
	I0108 21:40:36.015045  323391 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:40:36.016716  323391 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-150013/kubeconfig
	I0108 21:40:36.018091  323391 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-150013/.minikube
	I0108 21:40:36.019401  323391 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:40:36.020694  323391 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:40:36.022586  323391 config.go:182] Loaded profile config "running-upgrade-691168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0108 21:40:36.022629  323391 start_flags.go:703] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d
	I0108 21:40:36.024702  323391 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0108 21:40:36.025940  323391 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:40:36.052969  323391 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 21:40:36.053097  323391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:40:36.109499  323391 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:77 OomKillDisable:true NGoroutines:79 SystemTime:2024-01-08 21:40:36.101144312 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:40:36.109594  323391 docker.go:295] overlay module found
	I0108 21:40:36.111484  323391 out.go:177] * Using the docker driver based on existing profile
	I0108 21:40:36.112747  323391 start.go:298] selected driver: docker
	I0108 21:40:36.112757  323391 start.go:902] validating driver "docker" against &{Name:running-upgrade-691168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-691168 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.4 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:40:36.112833  323391 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:40:36.113690  323391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:40:36.203526  323391 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:77 OomKillDisable:true NGoroutines:79 SystemTime:2024-01-08 21:40:36.190974999 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:40:36.203867  323391 cni.go:84] Creating CNI manager for ""
	I0108 21:40:36.203895  323391 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0108 21:40:36.203904  323391 start_flags.go:321] config:
	{Name:running-upgrade-691168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-691168 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.4 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:40:36.205845  323391 out.go:177] * Starting control plane node running-upgrade-691168 in cluster running-upgrade-691168
	I0108 21:40:36.207227  323391 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 21:40:36.208685  323391 out.go:177] * Pulling base image v0.0.42-1703790982-17866 ...
	I0108 21:40:36.210001  323391 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0108 21:40:36.210029  323391 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local docker daemon
	I0108 21:40:36.233198  323391 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local docker daemon, skipping pull
	I0108 21:40:36.233225  323391 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d exists in daemon, skipping load
	W0108 21:40:36.238202  323391 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0108 21:40:36.238376  323391 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/running-upgrade-691168/config.json ...
	I0108 21:40:36.238455  323391 cache.go:107] acquiring lock: {Name:mk4ec20f4b242bc83b9cd28ea2ad2647131ee50c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:40:36.238563  323391 cache.go:115] /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0108 21:40:36.238579  323391 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 136.487µs
	I0108 21:40:36.238590  323391 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0108 21:40:36.238615  323391 cache.go:107] acquiring lock: {Name:mkf5e32a24b5b3536d702124be886bae6921a2a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:40:36.238631  323391 cache.go:194] Successfully downloaded all kic artifacts
	I0108 21:40:36.238660  323391 cache.go:115] /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0108 21:40:36.238662  323391 start.go:365] acquiring machines lock for running-upgrade-691168: {Name:mk70902fb51afa50931c4ffdbe01940784ebdef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:40:36.238670  323391 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 63.408µs
	I0108 21:40:36.238682  323391 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0108 21:40:36.238707  323391 cache.go:107] acquiring lock: {Name:mkcd633e000f47f3fd1822ca585d2f3207f54971 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:40:36.238739  323391 start.go:369] acquired machines lock for "running-upgrade-691168" in 61.553µs
	I0108 21:40:36.238749  323391 cache.go:115] /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0108 21:40:36.238754  323391 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:40:36.238757  323391 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 62.968µs
	I0108 21:40:36.238760  323391 fix.go:54] fixHost starting: m01
	I0108 21:40:36.238766  323391 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0108 21:40:36.238779  323391 cache.go:107] acquiring lock: {Name:mk16f3f2ecaabb0c0beed78dfa95b5b8d6ffd9c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:40:36.238816  323391 cache.go:115] /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0108 21:40:36.238826  323391 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 49.429µs
	I0108 21:40:36.238837  323391 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0108 21:40:36.238850  323391 cache.go:107] acquiring lock: {Name:mk96b55f4eb12c37cd2eb5682474ab0e8371ffe4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:40:36.238888  323391 cache.go:115] /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0108 21:40:36.238896  323391 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 46.818µs
	I0108 21:40:36.238913  323391 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0108 21:40:36.238926  323391 cache.go:107] acquiring lock: {Name:mkb8934b056cd69145c4415c84f2a3265b2dcb82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:40:36.238960  323391 cache.go:115] /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0108 21:40:36.238969  323391 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 44.897µs
	I0108 21:40:36.238977  323391 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0108 21:40:36.238989  323391 cache.go:107] acquiring lock: {Name:mkd8c36746ea03b2b506b5ad936098acd6c4c683 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:40:36.238995  323391 cache.go:107] acquiring lock: {Name:mk3224f02004892b88a4655dcba9b94092416792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:40:36.239046  323391 cli_runner.go:164] Run: docker container inspect running-upgrade-691168 --format={{.State.Status}}
	I0108 21:40:36.239047  323391 cache.go:115] /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0108 21:40:36.239171  323391 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 161.779µs
	I0108 21:40:36.239224  323391 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0108 21:40:36.239049  323391 cache.go:115] /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0108 21:40:36.239242  323391 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 254.953µs
	I0108 21:40:36.239255  323391 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0108 21:40:36.239263  323391 cache.go:87] Successfully saved all images to host disk.
	I0108 21:40:36.261605  323391 fix.go:102] recreateIfNeeded on running-upgrade-691168: state=Running err=<nil>
	W0108 21:40:36.261639  323391 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 21:40:36.263615  323391 out.go:177] * Updating the running docker "running-upgrade-691168" container ...
	I0108 21:40:36.265016  323391 machine.go:88] provisioning docker machine ...
	I0108 21:40:36.265052  323391 ubuntu.go:169] provisioning hostname "running-upgrade-691168"
	I0108 21:40:36.265123  323391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-691168
	I0108 21:40:36.283263  323391 main.go:141] libmachine: Using SSH client type: native
	I0108 21:40:36.283609  323391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32947 <nil> <nil>}
	I0108 21:40:36.283623  323391 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-691168 && echo "running-upgrade-691168" | sudo tee /etc/hostname
	I0108 21:40:36.405089  323391 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-691168
	
	I0108 21:40:36.405179  323391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-691168
	I0108 21:40:36.423377  323391 main.go:141] libmachine: Using SSH client type: native
	I0108 21:40:36.423867  323391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32947 <nil> <nil>}
	I0108 21:40:36.423900  323391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-691168' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-691168/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-691168' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:40:36.541892  323391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:40:36.541928  323391 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17866-150013/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-150013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-150013/.minikube}
	I0108 21:40:36.541954  323391 ubuntu.go:177] setting up certificates
	I0108 21:40:36.541971  323391 provision.go:83] configureAuth start
	I0108 21:40:36.542034  323391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-691168
	I0108 21:40:36.561501  323391 provision.go:138] copyHostCerts
	I0108 21:40:36.561564  323391 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-150013/.minikube/ca.pem, removing ...
	I0108 21:40:36.561576  323391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-150013/.minikube/ca.pem
	I0108 21:40:36.561643  323391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-150013/.minikube/ca.pem (1078 bytes)
	I0108 21:40:36.561777  323391 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-150013/.minikube/cert.pem, removing ...
	I0108 21:40:36.561791  323391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-150013/.minikube/cert.pem
	I0108 21:40:36.561825  323391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-150013/.minikube/cert.pem (1123 bytes)
	I0108 21:40:36.561916  323391 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-150013/.minikube/key.pem, removing ...
	I0108 21:40:36.561928  323391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-150013/.minikube/key.pem
	I0108 21:40:36.561961  323391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-150013/.minikube/key.pem (1675 bytes)
	I0108 21:40:36.562022  323391 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-150013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-691168 san=[172.17.0.4 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-691168]
	I0108 21:40:36.652799  323391 provision.go:172] copyRemoteCerts
	I0108 21:40:36.652879  323391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:40:36.652928  323391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-691168
	I0108 21:40:36.672523  323391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32947 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/running-upgrade-691168/id_rsa Username:docker}
	I0108 21:40:36.752950  323391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:40:36.771703  323391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 21:40:36.790836  323391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 21:40:36.808291  323391 provision.go:86] duration metric: configureAuth took 266.30504ms
	I0108 21:40:36.808318  323391 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:40:36.808492  323391 config.go:182] Loaded profile config "running-upgrade-691168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0108 21:40:36.808592  323391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-691168
	I0108 21:40:36.826720  323391 main.go:141] libmachine: Using SSH client type: native
	I0108 21:40:36.827067  323391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32947 <nil> <nil>}
	I0108 21:40:36.827087  323391 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 21:40:37.236299  323391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 21:40:37.236331  323391 machine.go:91] provisioned docker machine in 971.298455ms
	I0108 21:40:37.236341  323391 start.go:300] post-start starting for "running-upgrade-691168" (driver="docker")
	I0108 21:40:37.236355  323391 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:40:37.236428  323391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:40:37.236479  323391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-691168
	I0108 21:40:37.261411  323391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32947 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/running-upgrade-691168/id_rsa Username:docker}
	I0108 21:40:37.348754  323391 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:40:37.351438  323391 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:40:37.351466  323391 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:40:37.351474  323391 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:40:37.351482  323391 info.go:137] Remote host: Ubuntu 19.10
	I0108 21:40:37.351491  323391 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-150013/.minikube/addons for local assets ...
	I0108 21:40:37.351539  323391 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-150013/.minikube/files for local assets ...
	I0108 21:40:37.351606  323391 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-150013/.minikube/files/etc/ssl/certs/1566482.pem -> 1566482.pem in /etc/ssl/certs
	I0108 21:40:37.351685  323391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:40:37.358400  323391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/files/etc/ssl/certs/1566482.pem --> /etc/ssl/certs/1566482.pem (1708 bytes)
	I0108 21:40:37.375291  323391 start.go:303] post-start completed in 138.93362ms
	I0108 21:40:37.375375  323391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:40:37.375419  323391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-691168
	I0108 21:40:37.395793  323391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32947 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/running-upgrade-691168/id_rsa Username:docker}
	I0108 21:40:37.474160  323391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:40:37.478114  323391 fix.go:56] fixHost completed within 1.239347004s
	I0108 21:40:37.478142  323391 start.go:83] releasing machines lock for "running-upgrade-691168", held for 1.239390416s
	I0108 21:40:37.478212  323391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-691168
	I0108 21:40:37.503821  323391 ssh_runner.go:195] Run: cat /version.json
	I0108 21:40:37.503902  323391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-691168
	I0108 21:40:37.503909  323391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:40:37.503954  323391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-691168
	I0108 21:40:37.534271  323391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32947 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/running-upgrade-691168/id_rsa Username:docker}
	I0108 21:40:37.538898  323391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32947 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/running-upgrade-691168/id_rsa Username:docker}
	W0108 21:40:37.612622  323391 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0108 21:40:37.612732  323391 ssh_runner.go:195] Run: systemctl --version
	I0108 21:40:37.659113  323391 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 21:40:37.713363  323391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 21:40:37.717767  323391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:40:37.732806  323391 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 21:40:37.732892  323391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:40:37.754517  323391 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 21:40:37.754545  323391 start.go:475] detecting cgroup driver to use...
	I0108 21:40:37.754580  323391 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 21:40:37.754633  323391 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:40:37.777641  323391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:40:37.786676  323391 docker.go:203] disabling cri-docker service (if available) ...
	I0108 21:40:37.786731  323391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 21:40:37.795351  323391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 21:40:37.803667  323391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0108 21:40:37.813183  323391 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0108 21:40:37.813242  323391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 21:40:37.890668  323391 docker.go:219] disabling docker service ...
	I0108 21:40:37.890734  323391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:40:37.899983  323391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:40:37.909177  323391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:40:37.980412  323391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:40:38.068588  323391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:40:38.078366  323391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:40:38.093621  323391 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0108 21:40:38.093685  323391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:40:38.103150  323391 out.go:177] 
	W0108 21:40:38.104368  323391 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0108 21:40:38.104384  323391 out.go:239] * 
	W0108 21:40:38.105254  323391 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:40:38.107580  323391 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-691168 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2024-01-08 21:40:38.125059939 +0000 UTC m=+1897.776942433
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-691168
helpers_test.go:235: (dbg) docker inspect running-upgrade-691168:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "90214767329c3b1ff0e0a399b92c5566a0a482a34e6959b45c8a8e5e4bd7839c",
	        "Created": "2024-01-08T21:39:30.582845105Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 307628,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T21:39:31.155253494Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/90214767329c3b1ff0e0a399b92c5566a0a482a34e6959b45c8a8e5e4bd7839c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/90214767329c3b1ff0e0a399b92c5566a0a482a34e6959b45c8a8e5e4bd7839c/hostname",
	        "HostsPath": "/var/lib/docker/containers/90214767329c3b1ff0e0a399b92c5566a0a482a34e6959b45c8a8e5e4bd7839c/hosts",
	        "LogPath": "/var/lib/docker/containers/90214767329c3b1ff0e0a399b92c5566a0a482a34e6959b45c8a8e5e4bd7839c/90214767329c3b1ff0e0a399b92c5566a0a482a34e6959b45c8a8e5e4bd7839c-json.log",
	        "Name": "/running-upgrade-691168",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-691168:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/268b253bd6fc413fa3a7c5fc2b00144a633442e8dbb3556ca83b59fff660c3a4-init/diff:/var/lib/docker/overlay2/2ecaee9a3cc532bed77281fbcb71f49e0ce4d28607641dfb30ab24c44c72ca4c/diff:/var/lib/docker/overlay2/8704ed4dd7f79b44d8a7c5a052b0c69e96cbbfa53e6eb41c1e247d227b301cfd/diff:/var/lib/docker/overlay2/63d5fe91a13717bee7ffc3ce96bada8b0cbfcb4f35245da8d6b4540c3485cda5/diff:/var/lib/docker/overlay2/2cf5333c832905c154477687b0aec8cdbb074ad3539fd790a4f34244604f8353/diff:/var/lib/docker/overlay2/bf211b1296b15fd0ff393bfc22158fcc187008d1f126bd23b6a936ca104d8553/diff:/var/lib/docker/overlay2/c232052d5c80164bcdf071e5d35f18fcaf5034d1b4d6688bc76c83449853ce6b/diff:/var/lib/docker/overlay2/8975da083c45127295b26feee28842c4e5a52dabedd30a8f166ba4525da36757/diff:/var/lib/docker/overlay2/e0cbebb22e3317bcb080508410ad7082453ba989cef28ba5bd8541f6b06cb716/diff:/var/lib/docker/overlay2/270dcf192c5684fe3b95da2431568219b786907a793aa4c48b7219a70237f555/diff:/var/lib/docker/overlay2/c7ff04
699ea8195d7d74e077fa18df3add2323acc734f5e1e20635e4a179ea3b/diff:/var/lib/docker/overlay2/e969f6a3a14f8a083ee7611d7f54c907de09d2bcf5b9b887de44d73303f5f5d8/diff:/var/lib/docker/overlay2/6ca11da93602d50d0cb961ca6f56cd16899f433a9d714e3823b3c8717245c211/diff:/var/lib/docker/overlay2/9d2737d1330fffb33a6288566f8b02294b8fbf37999383cb4bd1d233c53e99c0/diff:/var/lib/docker/overlay2/8dfebf12703c3c9b9076090ca27eae537eb783d77f7bb905e0292ab0ff223410/diff:/var/lib/docker/overlay2/fc3b541fea32c39d3b09d165068e6ab3c87db89a3d809030c95865db14c11dcc/diff:/var/lib/docker/overlay2/0a2ae5b4d7c1fbc92970874a85b8f728b0cffb75ed4eca9f70a2ec129cbd7438/diff:/var/lib/docker/overlay2/dec464680e016f8fa9283651c276d3d813627b7a8000cbcefb2ada75dba1791d/diff:/var/lib/docker/overlay2/95b4640d1d6aba304415ed85ffd977d99ebd1a60bd0d809fcaafd8d3144ce81a/diff:/var/lib/docker/overlay2/a69533d37f1414aa63770c14370d707015434fc666089014902e321a3a071977/diff:/var/lib/docker/overlay2/377ca787ca8c27fb7e8ed9eec91a8032bbcc3fd35020e15d303de46a058ba9d6/diff:/var/lib/d
ocker/overlay2/4f87f430443be3fd9f78cdbf77fd43d4dc388cb60af34e0cc29dccaec6cad09b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/268b253bd6fc413fa3a7c5fc2b00144a633442e8dbb3556ca83b59fff660c3a4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/268b253bd6fc413fa3a7c5fc2b00144a633442e8dbb3556ca83b59fff660c3a4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/268b253bd6fc413fa3a7c5fc2b00144a633442e8dbb3556ca83b59fff660c3a4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-691168",
	                "Source": "/var/lib/docker/volumes/running-upgrade-691168/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-691168",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-691168",
	                "name.minikube.sigs.k8s.io": "running-upgrade-691168",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8fda900066943718680d8c5bfaa8ff82fa5ab90b5f33f20cb4cdf124e2ae3bb6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32947"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32946"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32945"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8fda90006694",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "e5f427854f63c697829ba8467a3afc14756ebc04a3c7839376b309605fac32be",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.4",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:04",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "6e600ef690c0df893745381e06e3f9de91686389fe6da1c46026237c5e825c12",
	                    "EndpointID": "e5f427854f63c697829ba8467a3afc14756ebc04a3c7839376b309605fac32be",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.4",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:04",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
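
For reference, the "Ports" section of the inspect dump above is exactly what the repeated "docker container inspect -f" calls in the preceding log read to locate the SSH endpoint. A minimal stand-alone equivalent (container name taken from this report; the expected output assumes the container state captured above):

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  running-upgrade-691168
	# per the dump above this prints 32947, i.e. sshd is reachable at 127.0.0.1:32947
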
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-691168 -n running-upgrade-691168
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-691168 -n running-upgrade-691168: exit status 4 (316.168832ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 21:40:38.426195  323977 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-691168" does not appear in /home/jenkins/minikube-integration/17866-150013/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-691168" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-691168" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-691168
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-691168: (1.931236493s)
--- FAIL: TestRunningBinaryUpgrade (70.92s)
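
The terminal failure above is the RUNTIME_ENABLE error: the HEAD binary rewrites pause_image with sed -i against /etc/crio/crio.conf.d/02-crio.conf, but that drop-in file does not exist on the guest that minikube v1.9.0 provisioned (Ubuntu 19.10, per the os-release probe earlier in the log); CRI-O builds of that vintage keep their configuration in the single file /etc/crio/crio.conf. A hedged workaround sketch, not minikube's actual code path, that falls back to the legacy location:

	# Sketch only; assumes the CRI-O config is either the drop-in file used by
	# newer guests or the legacy /etc/crio/crio.conf present on older images.
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$CONF" ] || CONF=/etc/crio/crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"

The same exit status 90 recurs in TestStoppedBinaryUpgrade below, which exercises the identical v1.9.0-to-HEAD path against a stopped cluster.
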

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (90.48s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.9.0.2465313143.exe start -p stopped-upgrade-304512 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.9.0.2465313143.exe start -p stopped-upgrade-304512 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m21.537079992s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.9.0.2465313143.exe -p stopped-upgrade-304512 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.9.0.2465313143.exe -p stopped-upgrade-304512 stop: (2.519743906s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-304512 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-304512 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.420950307s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-304512] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-150013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-150013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-304512 in cluster stopped-upgrade-304512
	* Pulling base image v0.0.42-1703790982-17866 ...
	* Restarting existing docker container for "stopped-upgrade-304512" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 21:39:33.351478  308343 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:39:33.351609  308343 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:39:33.351620  308343 out.go:309] Setting ErrFile to fd 2...
	I0108 21:39:33.351625  308343 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:39:33.351899  308343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-150013/.minikube/bin
	I0108 21:39:33.352607  308343 out.go:303] Setting JSON to false
	I0108 21:39:33.354465  308343 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":15725,"bootTime":1704734248,"procs":575,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:39:33.354550  308343 start.go:138] virtualization: kvm guest
	I0108 21:39:33.357305  308343 out.go:177] * [stopped-upgrade-304512] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:39:33.359633  308343 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 21:39:33.361213  308343 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:39:33.359652  308343 notify.go:220] Checking for updates...
	I0108 21:39:33.363704  308343 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-150013/kubeconfig
	I0108 21:39:33.365243  308343 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-150013/.minikube
	I0108 21:39:33.366932  308343 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:39:33.368384  308343 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:39:33.370706  308343 config.go:182] Loaded profile config "stopped-upgrade-304512": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0108 21:39:33.370732  308343 start_flags.go:703] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d
	I0108 21:39:33.372828  308343 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0108 21:39:33.374469  308343 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:39:33.400471  308343 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 21:39:33.400576  308343 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:39:33.478853  308343 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:78 SystemTime:2024-01-08 21:39:33.466271968 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:39:33.478998  308343 docker.go:295] overlay module found
	I0108 21:39:33.480829  308343 out.go:177] * Using the docker driver based on existing profile
	I0108 21:39:33.482247  308343 start.go:298] selected driver: docker
	I0108 21:39:33.482263  308343 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-304512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-304512 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:39:33.482369  308343 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:39:33.483484  308343 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:39:33.560065  308343 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:78 SystemTime:2024-01-08 21:39:33.550170493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:39:33.560464  308343 cni.go:84] Creating CNI manager for ""
	I0108 21:39:33.560490  308343 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0108 21:39:33.560500  308343 start_flags.go:321] config:
	{Name:stopped-upgrade-304512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-304512 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:39:33.563393  308343 out.go:177] * Starting control plane node stopped-upgrade-304512 in cluster stopped-upgrade-304512
	I0108 21:39:33.565167  308343 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 21:39:33.566746  308343 out.go:177] * Pulling base image v0.0.42-1703790982-17866 ...
	I0108 21:39:33.568286  308343 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0108 21:39:33.568333  308343 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local docker daemon
	I0108 21:39:33.588280  308343 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local docker daemon, skipping pull
	I0108 21:39:33.588315  308343 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d exists in daemon, skipping load
	W0108 21:39:33.625771  308343 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0108 21:39:33.625982  308343 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/stopped-upgrade-304512/config.json ...
	I0108 21:39:33.626067  308343 cache.go:107] acquiring lock: {Name:mk16f3f2ecaabb0c0beed78dfa95b5b8d6ffd9c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:39:33.626108  308343 cache.go:107] acquiring lock: {Name:mk3224f02004892b88a4655dcba9b94092416792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:39:33.626097  308343 cache.go:107] acquiring lock: {Name:mkf5e32a24b5b3536d702124be886bae6921a2a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:39:33.626145  308343 cache.go:107] acquiring lock: {Name:mk96b55f4eb12c37cd2eb5682474ab0e8371ffe4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:39:33.626173  308343 cache.go:107] acquiring lock: {Name:mkd8c36746ea03b2b506b5ad936098acd6c4c683 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:39:33.626143  308343 cache.go:107] acquiring lock: {Name:mkb8934b056cd69145c4415c84f2a3265b2dcb82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:39:33.626225  308343 cache.go:194] Successfully downloaded all kic artifacts
	I0108 21:39:33.626268  308343 start.go:365] acquiring machines lock for stopped-upgrade-304512: {Name:mkaa62912d9f4c31ce43028c357845222cc1f82e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:39:33.626275  308343 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.0
	I0108 21:39:33.626278  308343 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0108 21:39:33.626298  308343 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.0
	I0108 21:39:33.626325  308343 start.go:369] acquired machines lock for "stopped-upgrade-304512" in 41.426µs
	I0108 21:39:33.626073  308343 cache.go:107] acquiring lock: {Name:mk4ec20f4b242bc83b9cd28ea2ad2647131ee50c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:39:33.626344  308343 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:39:33.626354  308343 fix.go:54] fixHost starting: m01
	I0108 21:39:33.626376  308343 cache.go:115] /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0108 21:39:33.626369  308343 cache.go:107] acquiring lock: {Name:mkcd633e000f47f3fd1822ca585d2f3207f54971 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:39:33.626396  308343 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 332.456µs
	I0108 21:39:33.626414  308343 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0108 21:39:33.626444  308343 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.0
	I0108 21:39:33.626465  308343 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.0
	I0108 21:39:33.626326  308343 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0108 21:39:33.626447  308343 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0108 21:39:33.626662  308343 cli_runner.go:164] Run: docker container inspect stopped-upgrade-304512 --format={{.State.Status}}
	I0108 21:39:33.627812  308343 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0108 21:39:33.627821  308343 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0108 21:39:33.627826  308343 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.0
	I0108 21:39:33.627835  308343 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.0
	I0108 21:39:33.627810  308343 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.0
	I0108 21:39:33.627860  308343 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.0
	I0108 21:39:33.627869  308343 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0108 21:39:33.648385  308343 fix.go:102] recreateIfNeeded on stopped-upgrade-304512: state=Stopped err=<nil>
	W0108 21:39:33.648426  308343 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 21:39:33.652203  308343 out.go:177] * Restarting existing docker container for "stopped-upgrade-304512" ...
	I0108 21:39:33.654333  308343 cli_runner.go:164] Run: docker start stopped-upgrade-304512
	I0108 21:39:33.848950  308343 cache.go:162] opening:  /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0108 21:39:33.868816  308343 cache.go:162] opening:  /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0108 21:39:33.898731  308343 cache.go:162] opening:  /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0
	I0108 21:39:33.910119  308343 cache.go:162] opening:  /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0108 21:39:33.938223  308343 cache.go:162] opening:  /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0
	I0108 21:39:33.950779  308343 cache.go:162] opening:  /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0
	I0108 21:39:33.953359  308343 cli_runner.go:164] Run: docker container inspect stopped-upgrade-304512 --format={{.State.Status}}
	I0108 21:39:33.960625  308343 cache.go:157] /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0108 21:39:33.960654  308343 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 334.528708ms
	I0108 21:39:33.960673  308343 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0108 21:39:33.970731  308343 kic.go:430] container "stopped-upgrade-304512" state is running.
	I0108 21:39:33.985665  308343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-304512
	I0108 21:39:33.989140  308343 cache.go:162] opening:  /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0
	I0108 21:39:34.010833  308343 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/stopped-upgrade-304512/config.json ...
	I0108 21:39:34.116750  308343 machine.go:88] provisioning docker machine ...
	I0108 21:39:34.116791  308343 ubuntu.go:169] provisioning hostname "stopped-upgrade-304512"
	I0108 21:39:34.116858  308343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-304512
	I0108 21:39:34.137167  308343 main.go:141] libmachine: Using SSH client type: native
	I0108 21:39:34.137742  308343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32950 <nil> <nil>}
	I0108 21:39:34.137768  308343 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-304512 && echo "stopped-upgrade-304512" | sudo tee /etc/hostname
	I0108 21:39:34.138490  308343 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34510->127.0.0.1:32950: read: connection reset by peer
	I0108 21:39:34.500053  308343 cache.go:157] /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0108 21:39:34.500084  308343 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 873.911708ms
	I0108 21:39:34.500101  308343 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0108 21:39:34.986734  308343 cache.go:157] /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0108 21:39:34.986770  308343 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 1.360713989s
	I0108 21:39:34.986791  308343 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0108 21:39:35.316595  308343 cache.go:157] /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0108 21:39:35.316628  308343 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 1.690259342s
	I0108 21:39:35.316646  308343 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0108 21:39:35.616394  308343 cache.go:157] /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0108 21:39:35.616428  308343 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 1.99033709s
	I0108 21:39:35.616445  308343 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0108 21:39:35.640474  308343 cache.go:157] /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0108 21:39:35.640508  308343 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 2.014399453s
	I0108 21:39:35.640525  308343 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0108 21:39:35.977909  308343 cache.go:157] /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0108 21:39:35.977996  308343 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 2.351865514s
	I0108 21:39:35.978025  308343 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17866-150013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0108 21:39:35.978069  308343 cache.go:87] Successfully saved all images to host disk.
	I0108 21:39:37.268378  308343 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-304512
	
	I0108 21:39:37.268463  308343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-304512
	I0108 21:39:37.287932  308343 main.go:141] libmachine: Using SSH client type: native
	I0108 21:39:37.288458  308343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32950 <nil> <nil>}
	I0108 21:39:37.288498  308343 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-304512' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-304512/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-304512' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:39:37.417000  308343 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:39:37.417042  308343 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17866-150013/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-150013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-150013/.minikube}
	I0108 21:39:37.417088  308343 ubuntu.go:177] setting up certificates
	I0108 21:39:37.417104  308343 provision.go:83] configureAuth start
	I0108 21:39:37.417172  308343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-304512
	I0108 21:39:37.440826  308343 provision.go:138] copyHostCerts
	I0108 21:39:37.440910  308343 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-150013/.minikube/ca.pem, removing ...
	I0108 21:39:37.440926  308343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-150013/.minikube/ca.pem
	I0108 21:39:37.441000  308343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-150013/.minikube/ca.pem (1078 bytes)
	I0108 21:39:37.441118  308343 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-150013/.minikube/cert.pem, removing ...
	I0108 21:39:37.441125  308343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-150013/.minikube/cert.pem
	I0108 21:39:37.441157  308343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-150013/.minikube/cert.pem (1123 bytes)
	I0108 21:39:37.441233  308343 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-150013/.minikube/key.pem, removing ...
	I0108 21:39:37.441241  308343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-150013/.minikube/key.pem
	I0108 21:39:37.441263  308343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-150013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-150013/.minikube/key.pem (1675 bytes)
	I0108 21:39:37.441313  308343 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-150013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-304512 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-304512]
	I0108 21:39:37.649960  308343 provision.go:172] copyRemoteCerts
	I0108 21:39:37.650026  308343 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:39:37.650071  308343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-304512
	I0108 21:39:37.680592  308343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32950 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/stopped-upgrade-304512/id_rsa Username:docker}
	I0108 21:39:37.770287  308343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:39:37.791540  308343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 21:39:37.831529  308343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 21:39:37.857832  308343 provision.go:86] duration metric: configureAuth took 440.713291ms
	I0108 21:39:37.857868  308343 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:39:37.858070  308343 config.go:182] Loaded profile config "stopped-upgrade-304512": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0108 21:39:37.858184  308343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-304512
	I0108 21:39:37.883552  308343 main.go:141] libmachine: Using SSH client type: native
	I0108 21:39:37.884032  308343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32950 <nil> <nil>}
	I0108 21:39:37.884070  308343 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 21:39:38.626040  308343 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 21:39:38.626067  308343 machine.go:91] provisioned docker machine in 4.509290403s
	I0108 21:39:38.626078  308343 start.go:300] post-start starting for "stopped-upgrade-304512" (driver="docker")
	I0108 21:39:38.626091  308343 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:39:38.626149  308343 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:39:38.626195  308343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-304512
	I0108 21:39:38.703501  308343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32950 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/stopped-upgrade-304512/id_rsa Username:docker}
	I0108 21:39:38.802715  308343 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:39:38.814631  308343 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:39:38.814662  308343 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:39:38.814674  308343 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:39:38.814683  308343 info.go:137] Remote host: Ubuntu 19.10
	I0108 21:39:38.814699  308343 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-150013/.minikube/addons for local assets ...
	I0108 21:39:38.814774  308343 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-150013/.minikube/files for local assets ...
	I0108 21:39:38.814891  308343 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-150013/.minikube/files/etc/ssl/certs/1566482.pem -> 1566482.pem in /etc/ssl/certs
	I0108 21:39:38.815032  308343 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:39:38.825933  308343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-150013/.minikube/files/etc/ssl/certs/1566482.pem --> /etc/ssl/certs/1566482.pem (1708 bytes)
	I0108 21:39:38.853340  308343 start.go:303] post-start completed in 227.247341ms
	I0108 21:39:38.853422  308343 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:39:38.853482  308343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-304512
	I0108 21:39:38.893996  308343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32950 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/stopped-upgrade-304512/id_rsa Username:docker}
	I0108 21:39:38.979348  308343 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:39:38.983791  308343 fix.go:56] fixHost completed within 5.357431137s
	I0108 21:39:38.983813  308343 start.go:83] releasing machines lock for "stopped-upgrade-304512", held for 5.35747539s
	I0108 21:39:38.983865  308343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-304512
	I0108 21:39:39.019147  308343 ssh_runner.go:195] Run: cat /version.json
	I0108 21:39:39.019215  308343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-304512
	I0108 21:39:39.019235  308343 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:39:39.019299  308343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-304512
	I0108 21:39:39.041323  308343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32950 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/stopped-upgrade-304512/id_rsa Username:docker}
	I0108 21:39:39.052879  308343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32950 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/stopped-upgrade-304512/id_rsa Username:docker}
	W0108 21:39:39.125616  308343 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0108 21:39:39.125730  308343 ssh_runner.go:195] Run: systemctl --version
	I0108 21:39:39.260202  308343 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 21:39:39.315116  308343 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 21:39:39.321057  308343 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:39:39.342607  308343 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 21:39:39.342681  308343 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:39:39.369854  308343 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 21:39:39.369881  308343 start.go:475] detecting cgroup driver to use...
	I0108 21:39:39.369917  308343 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 21:39:39.369969  308343 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:39:39.395827  308343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:39:39.407597  308343 docker.go:203] disabling cri-docker service (if available) ...
	I0108 21:39:39.407668  308343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 21:39:39.418091  308343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 21:39:39.428224  308343 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0108 21:39:39.439186  308343 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0108 21:39:39.439237  308343 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 21:39:39.514082  308343 docker.go:219] disabling docker service ...
	I0108 21:39:39.514156  308343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:39:39.522929  308343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:39:39.531880  308343 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:39:39.594925  308343 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:39:39.662057  308343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:39:39.670506  308343 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:39:39.682428  308343 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0108 21:39:39.682489  308343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:39:39.692424  308343 out.go:177] 
	W0108 21:39:39.693786  308343 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0108 21:39:39.693808  308343 out.go:239] * 
	W0108 21:39:39.694629  308343 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:39:39.695938  308343 out.go:177] 

** /stderr **
version_upgrade_test.go:213: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-304512 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (90.48s)
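
The failure above has a clear proximate cause in the stderr: the new minikube binary assumes CRI-O's drop-in configuration layout and rewrites pause_image in /etc/crio/crio.conf.d/02-crio.conf, but the Ubuntu 19.10 machine image created by minikube v1.9.0 predates that layout, so the sed exits with status 2 and start aborts with RUNTIME_ENABLE. The Go sketch below is a hypothetical illustration of a guarded version of that edit, written against the standard library only; it is not minikube's actual code, and updatePauseImage and the fallback to the legacy /etc/crio/crio.conf path are assumptions for illustration.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// updatePauseImage is a hypothetical sketch of the edit that failed in the
	// log: it probes for the modern drop-in file first and falls back to the
	// legacy single-file config that v1.9.0-era images ship, instead of
	// assuming the drop-in path exists.
	func updatePauseImage(image string) error {
		candidates := []string{
			"/etc/crio/crio.conf.d/02-crio.conf", // modern drop-in layout
			"/etc/crio/crio.conf",                // legacy layout on old images (assumed fallback)
		}
		for _, path := range candidates {
			if err := exec.Command("sudo", "test", "-e", path).Run(); err != nil {
				continue // config file absent; try the next candidate
			}
			// Same substitution the log shows, applied to whichever file exists.
			expr := fmt.Sprintf(`s|^.*pause_image = .*$|pause_image = "%s"|`, image)
			return exec.Command("sudo", "sed", "-i", expr, path).Run()
		}
		return fmt.Errorf("no CRI-O config found in %v", candidates)
	}

	func main() {
		if err := updatePauseImage("registry.k8s.io/pause:3.2"); err != nil {
			fmt.Println("update pause_image:", err)
		}
	}

Probing both paths first would let the upgrade degrade gracefully on pre-drop-in images instead of failing the whole start.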

Test pass (284/316)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 5.07
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.4/json-events 5.39
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.07
17 TestDownloadOnly/v1.29.0-rc.2/json-events 5.44
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.2
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
25 TestDownloadOnlyKic 1.28
26 TestBinaryMirror 0.72
27 TestOffline 80.7
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
32 TestAddons/Setup 138.96
34 TestAddons/parallel/Registry 13.72
36 TestAddons/parallel/InspektorGadget 11.97
37 TestAddons/parallel/MetricsServer 5.64
38 TestAddons/parallel/HelmTiller 10.59
40 TestAddons/parallel/CSI 70.61
41 TestAddons/parallel/Headlamp 13.24
42 TestAddons/parallel/CloudSpanner 5.51
43 TestAddons/parallel/LocalPath 52.89
44 TestAddons/parallel/NvidiaDevicePlugin 5.47
45 TestAddons/parallel/Yakd 6.01
48 TestAddons/serial/GCPAuth/Namespaces 0.11
49 TestAddons/StoppedEnableDisable 12.21
50 TestCertOptions 28.16
51 TestCertExpiration 224.93
53 TestForceSystemdFlag 25.72
54 TestForceSystemdEnv 28.21
56 TestKVMDriverInstallOrUpdate 4.76
60 TestErrorSpam/setup 21.06
61 TestErrorSpam/start 0.63
62 TestErrorSpam/status 0.91
63 TestErrorSpam/pause 1.54
64 TestErrorSpam/unpause 1.54
65 TestErrorSpam/stop 1.4
68 TestFunctional/serial/CopySyncFile 0
69 TestFunctional/serial/StartWithProxy 38.49
70 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/SoftStart 29.54
72 TestFunctional/serial/KubeContext 0.04
73 TestFunctional/serial/KubectlGetPods 0.07
76 TestFunctional/serial/CacheCmd/cache/add_remote 2.75
77 TestFunctional/serial/CacheCmd/cache/add_local 1.17
78 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
79 TestFunctional/serial/CacheCmd/cache/list 0.06
80 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
81 TestFunctional/serial/CacheCmd/cache/cache_reload 1.68
82 TestFunctional/serial/CacheCmd/cache/delete 0.12
83 TestFunctional/serial/MinikubeKubectlCmd 0.12
84 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
85 TestFunctional/serial/ExtraConfig 32.38
86 TestFunctional/serial/ComponentHealth 0.07
87 TestFunctional/serial/LogsCmd 1.33
88 TestFunctional/serial/LogsFileCmd 1.37
89 TestFunctional/serial/InvalidService 3.86
91 TestFunctional/parallel/ConfigCmd 0.57
92 TestFunctional/parallel/DashboardCmd 10.32
93 TestFunctional/parallel/DryRun 0.71
94 TestFunctional/parallel/InternationalLanguage 0.34
95 TestFunctional/parallel/StatusCmd 1.03
99 TestFunctional/parallel/ServiceCmdConnect 9.62
100 TestFunctional/parallel/AddonsCmd 0.15
101 TestFunctional/parallel/PersistentVolumeClaim 39.48
103 TestFunctional/parallel/SSHCmd 0.67
104 TestFunctional/parallel/CpCmd 2.35
105 TestFunctional/parallel/MySQL 22.96
106 TestFunctional/parallel/FileSync 0.35
107 TestFunctional/parallel/CertSync 1.81
111 TestFunctional/parallel/NodeLabels 0.1
113 TestFunctional/parallel/NonActiveRuntimeDisabled 0.7
115 TestFunctional/parallel/License 0.25
116 TestFunctional/parallel/Version/short 0.07
117 TestFunctional/parallel/Version/components 0.53
119 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.59
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 19.36
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
126 TestFunctional/parallel/ServiceCmd/DeployApp 8.19
127 TestFunctional/parallel/ServiceCmd/List 1.04
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.91
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
130 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
131 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
135 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
136 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
137 TestFunctional/parallel/ServiceCmd/Format 0.58
138 TestFunctional/parallel/ProfileCmd/profile_list 0.36
139 TestFunctional/parallel/ServiceCmd/URL 0.56
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
141 TestFunctional/parallel/MountCmd/any-port 6.98
142 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
143 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
144 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
145 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
146 TestFunctional/parallel/ImageCommands/ImageBuild 1.84
147 TestFunctional/parallel/ImageCommands/Setup 1.02
148 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 9.64
149 TestFunctional/parallel/MountCmd/specific-port 2.42
150 TestFunctional/parallel/MountCmd/VerifyCleanup 2.03
151 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.46
152 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.33
153 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.76
154 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
155 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.16
156 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.8
157 TestFunctional/delete_addon-resizer_images 0.07
158 TestFunctional/delete_my-image_image 0.01
159 TestFunctional/delete_minikube_cached_images 0.01
163 TestIngressAddonLegacy/StartLegacyK8sCluster 60.23
165 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.72
166 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.55
170 TestJSONOutput/start/Command 70.21
171 TestJSONOutput/start/Audit 0
173 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/pause/Command 0.66
177 TestJSONOutput/pause/Audit 0
179 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/unpause/Command 0.6
183 TestJSONOutput/unpause/Audit 0
185 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/stop/Command 5.77
189 TestJSONOutput/stop/Audit 0
191 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
193 TestErrorJSONOutput 0.24
195 TestKicCustomNetwork/create_custom_network 28.95
196 TestKicCustomNetwork/use_default_bridge_network 24.36
197 TestKicExistingNetwork 27.1
198 TestKicCustomSubnet 26.92
199 TestKicStaticIP 27.12
200 TestMainNoArgs 0.06
201 TestMinikubeProfile 49.05
204 TestMountStart/serial/StartWithMountFirst 8.33
205 TestMountStart/serial/VerifyMountFirst 0.26
206 TestMountStart/serial/StartWithMountSecond 5.44
207 TestMountStart/serial/VerifyMountSecond 0.26
208 TestMountStart/serial/DeleteFirst 1.63
209 TestMountStart/serial/VerifyMountPostDelete 0.26
210 TestMountStart/serial/Stop 1.22
211 TestMountStart/serial/RestartStopped 7.27
212 TestMountStart/serial/VerifyMountPostStop 0.26
215 TestMultiNode/serial/FreshStart2Nodes 87.75
216 TestMultiNode/serial/DeployApp2Nodes 3.03
218 TestMultiNode/serial/AddNode 19.67
219 TestMultiNode/serial/MultiNodeLabels 0.06
220 TestMultiNode/serial/ProfileList 0.28
221 TestMultiNode/serial/CopyFile 9.48
222 TestMultiNode/serial/StopNode 2.17
223 TestMultiNode/serial/StartAfterStop 11.53
224 TestMultiNode/serial/RestartKeepsNodes 113.76
225 TestMultiNode/serial/DeleteNode 4.71
226 TestMultiNode/serial/StopMultiNode 23.84
227 TestMultiNode/serial/RestartMultiNode 81.34
228 TestMultiNode/serial/ValidateNameConflict 23.41
233 TestPreload 137.22
235 TestScheduledStopUnix 100.46
238 TestInsufficientStorage 10.36
241 TestKubernetesUpgrade 351.61
242 TestMissingContainerUpgrade 156.86
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
245 TestStoppedBinaryUpgrade/Setup 0.46
246 TestNoKubernetes/serial/StartWithK8s 37.02
248 TestNoKubernetes/serial/StartWithStopK8s 10.13
249 TestNoKubernetes/serial/Start 6.93
250 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
251 TestNoKubernetes/serial/ProfileList 1.44
252 TestNoKubernetes/serial/Stop 1.25
253 TestNoKubernetes/serial/StartNoArgs 9.51
254 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
262 TestStoppedBinaryUpgrade/MinikubeLogs 0.56
264 TestPause/serial/Start 69.16
272 TestNetworkPlugins/group/false 3.79
273 TestPause/serial/SecondStartNoReconfiguration 35.5
277 TestPause/serial/Pause 0.85
278 TestPause/serial/VerifyStatus 0.34
279 TestPause/serial/Unpause 0.64
280 TestPause/serial/PauseAgain 0.81
281 TestPause/serial/DeletePaused 2.68
282 TestPause/serial/VerifyDeletedResources 16.94
284 TestStartStop/group/old-k8s-version/serial/FirstStart 119.51
286 TestStartStop/group/embed-certs/serial/FirstStart 45.67
287 TestStartStop/group/embed-certs/serial/DeployApp 8.27
288 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1
289 TestStartStop/group/embed-certs/serial/Stop 11.87
290 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
291 TestStartStop/group/embed-certs/serial/SecondStart 337.84
292 TestStartStop/group/old-k8s-version/serial/DeployApp 8.42
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.8
294 TestStartStop/group/old-k8s-version/serial/Stop 11.92
295 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
296 TestStartStop/group/old-k8s-version/serial/SecondStart 438.89
298 TestStartStop/group/no-preload/serial/FirstStart 55.94
300 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 67.87
301 TestStartStop/group/no-preload/serial/DeployApp 7.3
302 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.82
303 TestStartStop/group/no-preload/serial/Stop 11.91
304 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
305 TestStartStop/group/no-preload/serial/SecondStart 588.51
306 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.28
307 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.95
308 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.92
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
310 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 341.6
311 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 9.01
312 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
313 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
314 TestStartStop/group/embed-certs/serial/Pause 2.71
316 TestStartStop/group/newest-cni/serial/FirstStart 37.08
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.76
319 TestStartStop/group/newest-cni/serial/Stop 1.22
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
321 TestStartStop/group/newest-cni/serial/SecondStart 25.88
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
325 TestStartStop/group/newest-cni/serial/Pause 2.61
326 TestNetworkPlugins/group/auto/Start 69.25
327 TestNetworkPlugins/group/auto/KubeletFlags 0.28
328 TestNetworkPlugins/group/auto/NetCatPod 10.19
329 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
330 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
331 TestNetworkPlugins/group/auto/DNS 0.16
332 TestNetworkPlugins/group/auto/Localhost 0.14
333 TestNetworkPlugins/group/auto/HairPin 0.17
334 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
335 TestStartStop/group/old-k8s-version/serial/Pause 3.58
336 TestNetworkPlugins/group/kindnet/Start 45.65
337 TestNetworkPlugins/group/flannel/Start 59.36
338 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 13.01
339 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
340 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
341 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
342 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.8
343 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
344 TestNetworkPlugins/group/kindnet/NetCatPod 10.21
345 TestNetworkPlugins/group/enable-default-cni/Start 37.26
346 TestNetworkPlugins/group/kindnet/DNS 0.17
347 TestNetworkPlugins/group/kindnet/Localhost 0.15
348 TestNetworkPlugins/group/kindnet/HairPin 0.14
349 TestNetworkPlugins/group/flannel/ControllerPod 6.01
350 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
351 TestNetworkPlugins/group/flannel/NetCatPod 9.19
352 TestNetworkPlugins/group/bridge/Start 78.9
353 TestNetworkPlugins/group/flannel/DNS 0.15
354 TestNetworkPlugins/group/flannel/Localhost 0.16
355 TestNetworkPlugins/group/flannel/HairPin 0.14
356 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
357 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.29
358 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
359 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
360 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
361 TestNetworkPlugins/group/calico/Start 65.1
362 TestNetworkPlugins/group/custom-flannel/Start 55.62
363 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
364 TestNetworkPlugins/group/bridge/NetCatPod 10.18
365 TestNetworkPlugins/group/bridge/DNS 0.14
366 TestNetworkPlugins/group/bridge/Localhost 0.12
367 TestNetworkPlugins/group/bridge/HairPin 0.12
368 TestNetworkPlugins/group/calico/ControllerPod 6.01
369 TestNetworkPlugins/group/calico/KubeletFlags 0.3
370 TestNetworkPlugins/group/calico/NetCatPod 11.24
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.36
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.25
373 TestNetworkPlugins/group/custom-flannel/DNS 0.17
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
376 TestNetworkPlugins/group/calico/DNS 0.19
377 TestNetworkPlugins/group/calico/Localhost 0.14
378 TestNetworkPlugins/group/calico/HairPin 0.14
379 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
380 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
381 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
382 TestStartStop/group/no-preload/serial/Pause 2.63
TestDownloadOnly/v1.16.0/json-events (5.07s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-423014 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-423014 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.071079454s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (5.07s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-423014
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-423014: exit status 85 (73.815918ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-423014 | jenkins | v1.32.0 | 08 Jan 24 21:09 UTC |          |
	|         | -p download-only-423014        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 21:09:00
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:09:00.456179  156660 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:09:00.456302  156660 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:09:00.456311  156660 out.go:309] Setting ErrFile to fd 2...
	I0108 21:09:00.456316  156660 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:09:00.456516  156660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-150013/.minikube/bin
	W0108 21:09:00.456635  156660 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17866-150013/.minikube/config/config.json: open /home/jenkins/minikube-integration/17866-150013/.minikube/config/config.json: no such file or directory
	I0108 21:09:00.457257  156660 out.go:303] Setting JSON to true
	I0108 21:09:00.458225  156660 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":13892,"bootTime":1704734248,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:09:00.458299  156660 start.go:138] virtualization: kvm guest
	I0108 21:09:00.460853  156660 out.go:97] [download-only-423014] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:09:00.462396  156660 out.go:169] MINIKUBE_LOCATION=17866
	W0108 21:09:00.460954  156660 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17866-150013/.minikube/cache/preloaded-tarball: no such file or directory
	I0108 21:09:00.461030  156660 notify.go:220] Checking for updates...
	I0108 21:09:00.465324  156660 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:09:00.466737  156660 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17866-150013/kubeconfig
	I0108 21:09:00.468127  156660 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-150013/.minikube
	I0108 21:09:00.469613  156660 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0108 21:09:00.472254  156660 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 21:09:00.472512  156660 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:09:00.493936  156660 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 21:09:00.494024  156660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:09:00.843150  156660 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2024-01-08 21:09:00.834376782 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:09:00.843295  156660 docker.go:295] overlay module found
	I0108 21:09:00.845079  156660 out.go:97] Using the docker driver based on user configuration
	I0108 21:09:00.845112  156660 start.go:298] selected driver: docker
	I0108 21:09:00.845123  156660 start.go:902] validating driver "docker" against <nil>
	I0108 21:09:00.845248  156660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:09:00.894409  156660 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2024-01-08 21:09:00.886327884 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:09:00.894632  156660 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0108 21:09:00.895170  156660 start_flags.go:392] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0108 21:09:00.895298  156660 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0108 21:09:00.897287  156660 out.go:169] Using Docker driver with root privileges
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-423014"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
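
Note that the LogsDuration subtests deliberately run "minikube logs" against a profile that only downloaded artifacts, so exit status 85 ("The control plane node "" does not exist") is the expected outcome rather than a failure. A minimal, hypothetical Go sketch of that exit-code check, assuming only the binary path and profile name recorded in the log (this is not the test harness itself):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// "minikube logs" against a download-only profile has no control
		// plane node to read logs from, so the command is expected to
		// fail with exit status 85, as recorded above.
		err := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-423014").Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
			fmt.Println("got expected exit status 85 (no control plane node)")
			return
		}
		fmt.Println("unexpected result:", err)
	}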

TestDownloadOnly/v1.28.4/json-events (5.39s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-423014 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-423014 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.387263511s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (5.39s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-423014
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-423014: exit status 85 (73.585485ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-423014 | jenkins | v1.32.0 | 08 Jan 24 21:09 UTC |          |
	|         | -p download-only-423014        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-423014 | jenkins | v1.32.0 | 08 Jan 24 21:09 UTC |          |
	|         | -p download-only-423014        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 21:09:05
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:09:05.603247  156791 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:09:05.603479  156791 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:09:05.603487  156791 out.go:309] Setting ErrFile to fd 2...
	I0108 21:09:05.603492  156791 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:09:05.603670  156791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-150013/.minikube/bin
	W0108 21:09:05.603780  156791 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17866-150013/.minikube/config/config.json: open /home/jenkins/minikube-integration/17866-150013/.minikube/config/config.json: no such file or directory
	I0108 21:09:05.604183  156791 out.go:303] Setting JSON to true
	I0108 21:09:05.605046  156791 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":13898,"bootTime":1704734248,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:09:05.605112  156791 start.go:138] virtualization: kvm guest
	I0108 21:09:05.607217  156791 out.go:97] [download-only-423014] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:09:05.608759  156791 out.go:169] MINIKUBE_LOCATION=17866
	I0108 21:09:05.607326  156791 notify.go:220] Checking for updates...
	I0108 21:09:05.611444  156791 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:09:05.612863  156791 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17866-150013/kubeconfig
	I0108 21:09:05.614140  156791 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-150013/.minikube
	I0108 21:09:05.615387  156791 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0108 21:09:05.617627  156791 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 21:09:05.618095  156791 config.go:182] Loaded profile config "download-only-423014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0108 21:09:05.618140  156791 start.go:810] api.Load failed for download-only-423014: filestore "download-only-423014": Docker machine "download-only-423014" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 21:09:05.618210  156791 driver.go:392] Setting default libvirt URI to qemu:///system
	W0108 21:09:05.618236  156791 start.go:810] api.Load failed for download-only-423014: filestore "download-only-423014": Docker machine "download-only-423014" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 21:09:05.640250  156791 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 21:09:05.640338  156791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:09:05.689154  156791 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:39 SystemTime:2024-01-08 21:09:05.680936905 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:09:05.689258  156791 docker.go:295] overlay module found
	I0108 21:09:05.691163  156791 out.go:97] Using the docker driver based on existing profile
	I0108 21:09:05.691186  156791 start.go:298] selected driver: docker
	I0108 21:09:05.691191  156791 start.go:902] validating driver "docker" against &{Name:download-only-423014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-423014 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:09:05.691330  156791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:09:05.740918  156791 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:39 SystemTime:2024-01-08 21:09:05.733148369 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:09:05.741627  156791 cni.go:84] Creating CNI manager for ""
	I0108 21:09:05.741648  156791 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 21:09:05.741658  156791 start_flags.go:321] config:
	{Name:download-only-423014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-423014 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:09:05.743559  156791 out.go:97] Starting control plane node download-only-423014 in cluster download-only-423014
	I0108 21:09:05.743579  156791 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 21:09:05.744935  156791 out.go:97] Pulling base image v0.0.42-1703790982-17866 ...
	I0108 21:09:05.744957  156791 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:09:05.745088  156791 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local docker daemon
	I0108 21:09:05.759515  156791 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d to local cache
	I0108 21:09:05.759678  156791 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local cache directory
	I0108 21:09:05.759698  156791 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local cache directory, skipping pull
	I0108 21:09:05.759704  156791 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d exists in cache, skipping pull
	I0108 21:09:05.759717  156791 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d as a tarball
	I0108 21:09:05.774723  156791 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 21:09:05.774754  156791 cache.go:56] Caching tarball of preloaded images
	I0108 21:09:05.774871  156791 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:09:05.776602  156791 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0108 21:09:05.776619  156791 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0108 21:09:05.816174  156791 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/17866-150013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 21:09:09.362462  156791 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0108 21:09:09.362563  156791 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17866-150013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0108 21:09:10.282912  156791 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 21:09:10.283054  156791 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/download-only-423014/config.json ...
	I0108 21:09:10.283247  156791 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:09:10.283481  156791 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17866-150013/.minikube/cache/linux/amd64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-423014"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)
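The preload download above is checksum-gated: the md5 rides along as a query parameter on the tarball URL, and minikube verifies it after the transfer before trusting the cache. A minimal shell sketch of the same download-then-verify pattern, reusing the URL and checksum from the log (the local filename is illustrative):

# Fetch the v1.28.4 cri-o preload and verify it against the md5 from the log above.
URL="https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4"
curl -fsSL -o preload.tar.lz4 "$URL"
echo "b0bd7b3b222c094c365d9c9e10e48fc7  preload.tar.lz4" | md5sum -c -

md5sum -c exits non-zero on a mismatch, which is the property the cache step relies on.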

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (5.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-423014 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-423014 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.439362984s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (5.44s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-423014
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-423014: exit status 85 (75.236663ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-423014 | jenkins | v1.32.0 | 08 Jan 24 21:09 UTC |          |
	|         | -p download-only-423014           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-423014 | jenkins | v1.32.0 | 08 Jan 24 21:09 UTC |          |
	|         | -p download-only-423014           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-423014 | jenkins | v1.32.0 | 08 Jan 24 21:09 UTC |          |
	|         | -p download-only-423014           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 21:09:11
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:09:11.063973  156926 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:09:11.064085  156926 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:09:11.064094  156926 out.go:309] Setting ErrFile to fd 2...
	I0108 21:09:11.064099  156926 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:09:11.064293  156926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-150013/.minikube/bin
	W0108 21:09:11.064419  156926 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17866-150013/.minikube/config/config.json: open /home/jenkins/minikube-integration/17866-150013/.minikube/config/config.json: no such file or directory
	I0108 21:09:11.064848  156926 out.go:303] Setting JSON to true
	I0108 21:09:11.065694  156926 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":13903,"bootTime":1704734248,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:09:11.065763  156926 start.go:138] virtualization: kvm guest
	I0108 21:09:11.067882  156926 out.go:97] [download-only-423014] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:09:11.069501  156926 out.go:169] MINIKUBE_LOCATION=17866
	I0108 21:09:11.068096  156926 notify.go:220] Checking for updates...
	I0108 21:09:11.072358  156926 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:09:11.073777  156926 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17866-150013/kubeconfig
	I0108 21:09:11.075097  156926 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-150013/.minikube
	I0108 21:09:11.076412  156926 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0108 21:09:11.078820  156926 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 21:09:11.079486  156926 config.go:182] Loaded profile config "download-only-423014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W0108 21:09:11.079549  156926 start.go:810] api.Load failed for download-only-423014: filestore "download-only-423014": Docker machine "download-only-423014" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 21:09:11.079710  156926 driver.go:392] Setting default libvirt URI to qemu:///system
	W0108 21:09:11.079763  156926 start.go:810] api.Load failed for download-only-423014: filestore "download-only-423014": Docker machine "download-only-423014" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 21:09:11.104110  156926 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 21:09:11.104200  156926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:09:11.154654  156926 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-08 21:09:11.146680016 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:09:11.154749  156926 docker.go:295] overlay module found
	I0108 21:09:11.156545  156926 out.go:97] Using the docker driver based on existing profile
	I0108 21:09:11.156564  156926 start.go:298] selected driver: docker
	I0108 21:09:11.156569  156926 start.go:902] validating driver "docker" against &{Name:download-only-423014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-423014 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:09:11.156700  156926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:09:11.208855  156926 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-08 21:09:11.201169903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:09:11.209884  156926 cni.go:84] Creating CNI manager for ""
	I0108 21:09:11.209912  156926 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 21:09:11.209930  156926 start_flags.go:321] config:
	{Name:download-only-423014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-423014 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:09:11.212020  156926 out.go:97] Starting control plane node download-only-423014 in cluster download-only-423014
	I0108 21:09:11.212047  156926 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 21:09:11.213377  156926 out.go:97] Pulling base image v0.0.42-1703790982-17866 ...
	I0108 21:09:11.213397  156926 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 21:09:11.213495  156926 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local docker daemon
	I0108 21:09:11.228082  156926 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d to local cache
	I0108 21:09:11.228215  156926 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local cache directory
	I0108 21:09:11.228229  156926 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local cache directory, skipping pull
	I0108 21:09:11.228234  156926 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d exists in cache, skipping pull
	I0108 21:09:11.228244  156926 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d as a tarball
	I0108 21:09:11.240458  156926 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0108 21:09:11.240480  156926 cache.go:56] Caching tarball of preloaded images
	I0108 21:09:11.240602  156926 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 21:09:11.242326  156926 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0108 21:09:11.242340  156926 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0108 21:09:11.274991  156926 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:2e182f4d7475b49e22eaf15ea22c281b -> /home/jenkins/minikube-integration/17866-150013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0108 21:09:14.998577  156926 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0108 21:09:14.998670  156926 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17866-150013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0108 21:09:15.801280  156926 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0108 21:09:15.801427  156926 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/download-only-423014/config.json ...
	I0108 21:09:15.801651  156926 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 21:09:15.801833  156926 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17866-150013/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-423014"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-423014
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnlyKic (1.28s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-400635 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-400635" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-400635
--- PASS: TestDownloadOnlyKic (1.28s)

                                                
                                    
x
+
TestBinaryMirror (0.72s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-735016 --alsologtostderr --binary-mirror http://127.0.0.1:36943 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-735016" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-735016
--- PASS: TestBinaryMirror (0.72s)
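TestBinaryMirror swaps dl.k8s.io for a local HTTP server via --binary-mirror, so the kubectl/kubelet/kubeadm downloads never leave the machine. A hedged sketch of standing such a mirror up by hand; the /release/<version>/bin/<os>/<arch>/ directory layout is an assumption about what minikube requests from the mirror, and the port and profile name are illustrative:

# Mirror the dl.k8s.io path layout locally and point minikube at it (layout assumed).
mkdir -p mirror/release/v1.28.4/bin/linux/amd64
cp kubectl kubelet kubeadm mirror/release/v1.28.4/bin/linux/amd64/
(cd mirror && python3 -m http.server 36943) &
out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
  --binary-mirror http://127.0.0.1:36943 --driver=docker --container-runtime=crio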

                                                
                                    
x
+
TestOffline (80.7s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-251717 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-251717 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m18.266693023s)
helpers_test.go:175: Cleaning up "offline-crio-251717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-251717
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-251717: (2.435223433s)
--- PASS: TestOffline (80.70s)
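TestOffline passes because everything `start` needs is already on disk from the earlier download-only runs: the kicbase image and the preload tarball. A quick sanity check that the cache is warm before attempting an offline start (paths follow the MINIKUBE_HOME layout visible in the logs above; the cache/kic/ subdirectory for the cached base-image tarball is an assumption):

# Confirm the preload tarball and kicbase image are available without the network.
ls "$MINIKUBE_HOME"/cache/preloaded-tarball/
ls "$MINIKUBE_HOME"/cache/kic/ 2>/dev/null   # assumed location of the cached kicbase tarball
docker images gcr.io/k8s-minikube/kicbase-builds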

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-954584
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-954584: exit status 85 (69.75747ms)

                                                
                                                
-- stdout --
	* Profile "addons-954584" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-954584"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-954584
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-954584: exit status 85 (70.14773ms)

                                                
                                                
-- stdout --
	* Profile "addons-954584" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-954584"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (138.96s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-954584 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-954584 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m18.962584402s)
--- PASS: TestAddons/Setup (138.96s)

                                                
                                    
x
+
TestAddons/parallel/Registry (13.72s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 12.691013ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-mjp6n" [2b788f31-0fbc-4a01-9482-8c2af240ed16] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004572885s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lqthv" [1b8d1aac-9fd6-4221-beea-1a33b3cf142f] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00428206s
addons_test.go:340: (dbg) Run:  kubectl --context addons-954584 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-954584 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-954584 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.93479912s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-954584 ip
2024/01/08 21:11:51 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-954584 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.72s)
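The registry check above verifies in-cluster DNS and the registry service in one shot: a throwaway busybox pod probes the service's cluster-local name. The same probe can be run by hand; --spider makes wget request the URL without saving anything and -S prints the response headers (only the pod name differs from the logged command and is illustrative):

kubectl --context addons-954584 run --rm registry-probe --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -it -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"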

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.97s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-l62sr" [86220146-e007-438a-b8a6-b0a645c5c7a1] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004264659s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-954584
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-954584: (5.960968302s)
--- PASS: TestAddons/parallel/InspektorGadget (11.97s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.64s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.627163ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-5zm94" [1333b768-bab4-4c7f-9cb2-984cb4bafd4e] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005161575s
addons_test.go:415: (dbg) Run:  kubectl --context addons-954584 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-954584 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.64s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (10.59s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 12.258737ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-vxnlw" [c122796b-ffb5-464a-bcc9-4c4b75bcc423] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.004710969s
addons_test.go:473: (dbg) Run:  kubectl --context addons-954584 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-954584 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.001132188s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-954584 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.59s)

                                                
                                    
x
+
TestAddons/parallel/CSI (70.61s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 12.76753ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-954584 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-954584 get pvc hpvc -o jsonpath={.status.phase} -n default
[previous poll repeated 39 more times while pvc "hpvc" waited to reach Bound]
addons_test.go:574: (dbg) Run:  kubectl --context addons-954584 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5539a66f-ef99-45b9-9031-42df89f75a89] Pending
helpers_test.go:344: "task-pv-pod" [5539a66f-ef99-45b9-9031-42df89f75a89] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5539a66f-ef99-45b9-9031-42df89f75a89] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003063002s
addons_test.go:584: (dbg) Run:  kubectl --context addons-954584 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-954584 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-954584 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-954584 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-954584 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-954584 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-954584 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-954584 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-954584 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-954584 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f6eca97f-9691-47f8-89b4-9e4a0b34f24a] Pending
helpers_test.go:344: "task-pv-pod-restore" [f6eca97f-9691-47f8-89b4-9e4a0b34f24a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f6eca97f-9691-47f8-89b4-9e4a0b34f24a] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003985965s
addons_test.go:626: (dbg) Run:  kubectl --context addons-954584 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-954584 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-954584 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-954584 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-954584 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.51341922s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-954584 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (70.61s)
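The CSI run above walks the full lifecycle: provision a PVC (hpvc), attach it to task-pv-pod, snapshot it (new-snapshot-demo), then restore the snapshot into hpvc-restore and a second pod. The testdata manifests themselves are not reproduced in the log; a hypothetical, minimal PVC-plus-snapshot pair in the same spirit might look like the following, where the csi-hostpath-sc and csi-hostpath-snapclass names are assumptions about what the addon installs:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: csi-hostpath-sc          # assumed addon storage class name
  resources:
    requests:
      storage: 1Gi
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed snapshot class name
  source:
    persistentVolumeClaimName: hpvc
EOF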

                                                
                                    
x
+
TestAddons/parallel/Headlamp (13.24s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-954584 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-954584 --alsologtostderr -v=1: (1.240059699s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-9hf2t" [dff65ff4-096d-4eff-ad97-18df0c25e3c3] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-9hf2t" [dff65ff4-096d-4eff-ad97-18df0c25e3c3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-9hf2t" [dff65ff4-096d-4eff-ad97-18df0c25e3c3] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003416266s
--- PASS: TestAddons/parallel/Headlamp (13.24s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.51s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-92sjq" [a042a5a9-ca31-42b3-b4d2-b4a3b047961a] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003477455s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-954584
--- PASS: TestAddons/parallel/CloudSpanner (5.51s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (52.89s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-954584 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-954584 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-954584 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-954584 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-954584 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-954584 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-954584 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-954584 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3a937611-5e7c-4324-98f4-ab01213078bd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3a937611-5e7c-4324-98f4-ab01213078bd] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3a937611-5e7c-4324-98f4-ab01213078bd] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.021059062s
addons_test.go:891: (dbg) Run:  kubectl --context addons-954584 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-954584 ssh "cat /opt/local-path-provisioner/pvc-6097a29d-4577-4b55-9867-558bcd95400c_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-954584 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-954584 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-954584 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-954584 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.946929306s)
--- PASS: TestAddons/parallel/LocalPath (52.89s)
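The Pending phases in the polls above are expected: the local-path provisioner defers binding until a consuming pod is scheduled (WaitForFirstConsumer), so test-pvc only leaves Pending once pod.yaml lands. A hypothetical claim equivalent to testdata/storage-provisioner-rancher/pvc.yaml, assuming the provisioner's conventional local-path storage class name and an illustrative size:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path    # conventional class for the rancher provisioner
  resources:
    requests:
      storage: 64Mi
EOF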

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.47s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-m7f7g" [39bbe5de-807a-4462-af6d-a7c9fe467dd8] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004190137s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-954584
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.47s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-6jr4v" [0bd0d279-4421-4128-9349-4b3369fd993d] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003975331s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-954584 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-954584 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.21s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-954584
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-954584: (11.929831267s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-954584
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-954584
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-954584
--- PASS: TestAddons/StoppedEnableDisable (12.21s)

                                                
                                    
x
+
TestCertOptions (28.16s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-797571 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E0108 21:41:38.048081  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-797571 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.581882772s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-797571 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-797571 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-797571 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-797571" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-797571
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-797571: (1.964404069s)
--- PASS: TestCertOptions (28.16s)
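For reference, the certificate checks above can be reproduced by hand; a minimal sketch, assuming a hypothetical profile name my-profile (the flags mirror the logged commands):

	minikube start -p my-profile --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --container-runtime=crio
	# the extra IPs and names should appear under "X509v3 Subject Alternative Name":
	minikube -p my-profile ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"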

TestCertExpiration (224.93s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-552373 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E0108 21:41:02.362845  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-552373 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.034143508s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-552373 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E0108 21:44:39.318542  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-552373 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (15.324503836s)
helpers_test.go:175: Cleaning up "cert-expiration-552373" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-552373
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-552373: (2.567838879s)
--- PASS: TestCertExpiration (224.93s)
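The two starts above differ only in --cert-expiration: the first issues certificates valid for 3 minutes, the test waits for them to lapse, and the second start regenerates them with a one-year window (8760h = 365 days). A minimal sketch, assuming a hypothetical profile cert-demo:

	minikube start -p cert-demo --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=crio
	sleep 180   # let the 3-minute certificates expire
	minikube start -p cert-demo --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=crio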

TestForceSystemdFlag (25.72s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-903721 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-903721 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.104150203s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-903721 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-903721" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-903721
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-903721: (2.341062735s)
--- PASS: TestForceSystemdFlag (25.72s)
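The `cat /etc/crio/crio.conf.d/02-crio.conf` step checks that --force-systemd switched CRI-O to the systemd cgroup manager. A minimal sketch of the same check, assuming a hypothetical profile systemd-demo and that the drop-in carries the cgroup_manager key (an assumption, not shown in the log):

	minikube start -p systemd-demo --memory=2048 --force-systemd --driver=docker --container-runtime=crio
	# expected to print: cgroup_manager = "systemd"
	minikube -p systemd-demo ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"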

TestForceSystemdEnv (28.21s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-101581 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-101581 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.121827119s)
helpers_test.go:175: Cleaning up "force-systemd-env-101581" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-101581
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-101581: (3.085650003s)
--- PASS: TestForceSystemdEnv (28.21s)

TestKVMDriverInstallOrUpdate (4.76s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.76s)

TestErrorSpam/setup (21.06s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-349227 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-349227 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-349227 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-349227 --driver=docker  --container-runtime=crio: (21.05643738s)
--- PASS: TestErrorSpam/setup (21.06s)

TestErrorSpam/start (0.63s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-349227 --log_dir /tmp/nospam-349227 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-349227 --log_dir /tmp/nospam-349227 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-349227 --log_dir /tmp/nospam-349227 start --dry-run
--- PASS: TestErrorSpam/start (0.63s)

TestErrorSpam/status (0.91s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-349227 --log_dir /tmp/nospam-349227 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-349227 --log_dir /tmp/nospam-349227 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-349227 --log_dir /tmp/nospam-349227 status
--- PASS: TestErrorSpam/status (0.91s)

TestErrorSpam/pause (1.54s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-349227 --log_dir /tmp/nospam-349227 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-349227 --log_dir /tmp/nospam-349227 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-349227 --log_dir /tmp/nospam-349227 pause
--- PASS: TestErrorSpam/pause (1.54s)

TestErrorSpam/unpause (1.54s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-349227 --log_dir /tmp/nospam-349227 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-349227 --log_dir /tmp/nospam-349227 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-349227 --log_dir /tmp/nospam-349227 unpause
--- PASS: TestErrorSpam/unpause (1.54s)

TestErrorSpam/stop (1.4s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-349227 --log_dir /tmp/nospam-349227 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-349227 --log_dir /tmp/nospam-349227 stop: (1.198605858s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-349227 --log_dir /tmp/nospam-349227 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-349227 --log_dir /tmp/nospam-349227 stop
--- PASS: TestErrorSpam/stop (1.40s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17866-150013/.minikube/files/etc/test/nested/copy/156648/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (38.49s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-727506 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-727506 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (38.490036805s)
--- PASS: TestFunctional/serial/StartWithProxy (38.49s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.54s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-727506 --alsologtostderr -v=8
E0108 21:16:38.047866  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt: no such file or directory
E0108 21:16:38.053638  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt: no such file or directory
E0108 21:16:38.063944  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt: no such file or directory
E0108 21:16:38.084272  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt: no such file or directory
E0108 21:16:38.124516  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt: no such file or directory
E0108 21:16:38.204825  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt: no such file or directory
E0108 21:16:38.365212  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt: no such file or directory
E0108 21:16:38.685882  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt: no such file or directory
E0108 21:16:39.327052  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt: no such file or directory
E0108 21:16:40.607409  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt: no such file or directory
E0108 21:16:43.168320  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt: no such file or directory
E0108 21:16:48.288766  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-727506 --alsologtostderr -v=8: (29.535328179s)
functional_test.go:659: soft start took 29.536229476s for "functional-727506" cluster.
--- PASS: TestFunctional/serial/SoftStart (29.54s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-727506 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.75s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.75s)

TestFunctional/serial/CacheCmd/cache/add_local (1.17s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-727506 /tmp/TestFunctionalserialCacheCmdcacheadd_local3402736483/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 cache add minikube-local-cache-test:functional-727506
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 cache delete minikube-local-cache-test:functional-727506
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-727506
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-727506 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (281.315063ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)
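Taken together, the cache subtests exercise one round trip: an image is cached, removed from the node's container runtime, and restored with `cache reload`. A minimal sketch using the same image, with <profile> as a placeholder:

	minikube -p <profile> cache add registry.k8s.io/pause:latest
	minikube -p <profile> ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image is gone
	minikube -p <profile> cache reload
	minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again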

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
E0108 21:16:58.529567  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt: no such file or directory
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 kubectl -- --context functional-727506 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-727506 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (32.38s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-727506 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0108 21:17:19.010402  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-727506 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.376832435s)
functional_test.go:757: restart took 32.376983479s for "functional-727506" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.38s)
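--extra-config forwards per-component flags through to kubeadm; here the cluster is restarted with an extra admission plugin on the API server. A minimal sketch of confirming the flag landed, where the label selector is an assumption (standard for kubeadm static pods), not from the log:

	minikube start -p <profile> --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins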

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-727506 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.33s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-727506 logs: (1.332827532s)
--- PASS: TestFunctional/serial/LogsCmd (1.33s)

TestFunctional/serial/LogsFileCmd (1.37s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 logs --file /tmp/TestFunctionalserialLogsFileCmd1399216498/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-727506 logs --file /tmp/TestFunctionalserialLogsFileCmd1399216498/001/logs.txt: (1.365590174s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.37s)

TestFunctional/serial/InvalidService (3.86s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-727506 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-727506
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-727506: exit status 115 (363.782342ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30818 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-727506 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.86s)
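Exit status 115 (SVC_UNREACHABLE) is minikube's signal that the service exists but no running pod backs it, which is exactly what testdata/invalidsvc.yaml sets up. A minimal sketch of the same probe, with <profile> as a placeholder:

	kubectl apply -f testdata/invalidsvc.yaml
	minikube service invalid-svc -p <profile>   # exits 115: no running pod for the service
	kubectl get endpoints invalid-svc           # shows no ready addresses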

TestFunctional/parallel/ConfigCmd (0.57s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-727506 config get cpus: exit status 14 (96.851926ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-727506 config get cpus: exit status 14 (87.626161ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.57s)
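`config get` on an unset key exits with status 14, which the test drives twice around a set/unset cycle. A minimal sketch, with <profile> as a placeholder:

	minikube -p <profile> config unset cpus
	minikube -p <profile> config get cpus   # exit 14: key not found in config
	minikube -p <profile> config set cpus 2
	minikube -p <profile> config get cpus   # prints 2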

TestFunctional/parallel/DashboardCmd (10.32s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-727506 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-727506 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 192087: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.32s)

TestFunctional/parallel/DryRun (0.71s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-727506 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-727506 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (367.02954ms)

-- stdout --
	* [functional-727506] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-150013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-150013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0108 21:18:02.537181  191557 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:18:02.537281  191557 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:18:02.537288  191557 out.go:309] Setting ErrFile to fd 2...
	I0108 21:18:02.537293  191557 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:18:02.537492  191557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-150013/.minikube/bin
	I0108 21:18:02.538089  191557 out.go:303] Setting JSON to false
	I0108 21:18:02.539412  191557 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":14435,"bootTime":1704734248,"procs":698,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:18:02.539488  191557 start.go:138] virtualization: kvm guest
	I0108 21:18:02.560066  191557 out.go:177] * [functional-727506] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:18:02.565229  191557 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 21:18:02.565306  191557 notify.go:220] Checking for updates...
	I0108 21:18:02.666042  191557 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:18:02.671354  191557 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-150013/kubeconfig
	I0108 21:18:02.689178  191557 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-150013/.minikube
	I0108 21:18:02.692812  191557 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:18:02.737636  191557 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:18:02.741924  191557 config.go:182] Loaded profile config "functional-727506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:18:02.742628  191557 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:18:02.764833  191557 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 21:18:02.764960  191557 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:18:02.817972  191557 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:49 SystemTime:2024-01-08 21:18:02.809234652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:18:02.818070  191557 docker.go:295] overlay module found
	I0108 21:18:02.823263  191557 out.go:177] * Using the docker driver based on existing profile
	I0108 21:18:02.825095  191557 start.go:298] selected driver: docker
	I0108 21:18:02.825113  191557 start.go:902] validating driver "docker" against &{Name:functional-727506 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-727506 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:18:02.825248  191557 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:18:02.830871  191557 out.go:177] 
	W0108 21:18:02.832638  191557 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0108 21:18:02.834250  191557 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-727506 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.71s)
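A dry run validates the request without creating anything: 250MB is below minikube's 1800MB usable minimum, so the first invocation exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY, while the second (no memory override) passes. A minimal sketch:

	minikube start -p <profile> --dry-run --memory 250MB --driver=docker --container-runtime=crio   # exit 23
	minikube start -p <profile> --dry-run --driver=docker --container-runtime=crio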

TestFunctional/parallel/InternationalLanguage (0.34s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-727506 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-727506 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (334.950347ms)

-- stdout --
	* [functional-727506] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-150013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-150013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0108 21:18:03.249035  191756 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:18:03.249196  191756 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:18:03.249209  191756 out.go:309] Setting ErrFile to fd 2...
	I0108 21:18:03.249216  191756 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:18:03.249542  191756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-150013/.minikube/bin
	I0108 21:18:03.250104  191756 out.go:303] Setting JSON to false
	I0108 21:18:03.251521  191756 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":14435,"bootTime":1704734248,"procs":698,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:18:03.251596  191756 start.go:138] virtualization: kvm guest
	I0108 21:18:03.278755  191756 out.go:177] * [functional-727506] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0108 21:18:03.282683  191756 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 21:18:03.282685  191756 notify.go:220] Checking for updates...
	I0108 21:18:03.305651  191756 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:18:03.307650  191756 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-150013/kubeconfig
	I0108 21:18:03.314856  191756 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-150013/.minikube
	I0108 21:18:03.316684  191756 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:18:03.334206  191756 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:18:03.336115  191756 config.go:182] Loaded profile config "functional-727506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:18:03.336642  191756 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:18:03.359842  191756 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 21:18:03.359991  191756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:18:03.414808  191756 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:49 SystemTime:2024-01-08 21:18:03.405228206 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:18:03.414895  191756 docker.go:295] overlay module found
	I0108 21:18:03.431615  191756 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0108 21:18:03.435958  191756 start.go:298] selected driver: docker
	I0108 21:18:03.435983  191756 start.go:902] validating driver "docker" against &{Name:functional-727506 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-727506 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:18:03.436100  191756 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:18:03.496534  191756 out.go:177] 
	W0108 21:18:03.501331  191756 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0108 21:18:03.515719  191756 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.34s)

TestFunctional/parallel/StatusCmd (1.03s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)

TestFunctional/parallel/ServiceCmdConnect (9.62s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-727506 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-727506 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-hk2nt" [87634f57-3f2d-4317-b3f4-1aac3f882c7a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-hk2nt" [87634f57-3f2d-4317-b3f4-1aac3f882c7a] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004486782s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:32489
functional_test.go:1674: http://192.168.49.2:32489: success! body:

Hostname: hello-node-connect-55497b8b78-hk2nt

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32489
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.62s)
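The connect test is the standard NodePort round trip: create a deployment, expose it, and let `minikube service --url` resolve the node IP and port. A minimal sketch mirroring the logged commands, with <profile> as a placeholder:

	kubectl create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl expose deployment hello-node-connect --type=NodePort --port=8080
	curl "$(minikube -p <profile> service hello-node-connect --url)"   # e.g. http://192.168.49.2:32489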

TestFunctional/parallel/AddonsCmd (0.15s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (39.48s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1c54e821-6b0c-4f5e-b911-b6e620faf455] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004829242s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-727506 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-727506 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-727506 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-727506 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-727506 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [936e28cb-f84a-4628-a825-4bac8114ff8b] Pending
helpers_test.go:344: "sp-pod" [936e28cb-f84a-4628-a825-4bac8114ff8b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [936e28cb-f84a-4628-a825-4bac8114ff8b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.060582556s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-727506 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-727506 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-727506 delete -f testdata/storage-provisioner/pod.yaml: (1.267424128s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-727506 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [118e3b2b-7e0d-452f-85b3-c4dfeac886cb] Pending
helpers_test.go:344: "sp-pod" [118e3b2b-7e0d-452f-85b3-c4dfeac886cb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [118e3b2b-7e0d-452f-85b3-c4dfeac886cb] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004294391s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-727506 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (39.48s)
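
Note: this pass hinges on the second sp-pod seeing the file the first one wrote, i.e. the claim outlives pod deletion. A minimal sketch of that round trip, assuming a default StorageClass and reusing the claim and pod names from the log (the YAML here is illustrative, not the actual testdata file):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Mi
    EOF
    kubectl exec sp-pod -- touch /tmp/mount/foo   # write through the mounted claim
    kubectl delete pod sp-pod                     # the pod goes away; the claim stays Bound
    # ...recreate a pod that mounts the same claim, then:
    kubectl exec sp-pod -- ls /tmp/mount          # foo survives the pod restart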

TestFunctional/parallel/SSHCmd (0.67s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)

TestFunctional/parallel/CpCmd (2.35s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh -n functional-727506 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 cp functional-727506:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd705786100/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh -n functional-727506 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh -n functional-727506 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.35s)
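
The three cp invocations above cover host-to-guest, guest-to-host, and host-to-guest into a directory that does not exist yet (the log shows it gets created). The same round trip by hand, assuming a minikube binary on PATH and the profile name from the log:

    minikube -p functional-727506 cp testdata/cp-test.txt /home/docker/cp-test.txt          # host -> guest
    minikube -p functional-727506 cp functional-727506:/home/docker/cp-test.txt ./out.txt   # guest -> host
    minikube -p functional-727506 ssh "sudo cat /home/docker/cp-test.txt"                   # verify inside the node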

TestFunctional/parallel/MySQL (22.96s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-727506 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-4fwp4" [cafd3ae9-f651-4261-a8e2-02abb203672a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-4fwp4" [cafd3ae9-f651-4261-a8e2-02abb203672a] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.00391201s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-727506 exec mysql-859648c796-4fwp4 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-727506 exec mysql-859648c796-4fwp4 -- mysql -ppassword -e "show databases;": exit status 1 (201.089588ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-727506 exec mysql-859648c796-4fwp4 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-727506 exec mysql-859648c796-4fwp4 -- mysql -ppassword -e "show databases;": exit status 1 (158.153891ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-727506 exec mysql-859648c796-4fwp4 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.96s)
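
The two ERROR 2002 exits above are expected noise rather than failures: the pod reports Running as soon as the container starts, but mysqld has not yet created /var/run/mysqld/mysqld.sock, so the test keeps retrying until the query succeeds. A sketch of the same retry loop, reusing the deployment name and password from the log:

    # Poll until mysqld actually accepts connections on its socket.
    until kubectl exec deploy/mysql -- mysql -ppassword -e "show databases;"; do
      sleep 2
    done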

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/156648/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh "sudo cat /etc/test/nested/copy/156648/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)
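
FileSync verifies that a file staged on the host shows up inside the node: minikube mirrors everything under $MINIKUBE_HOME/files into the node's filesystem, rooted at /, when the machine starts. A sketch of staging the same path by hand (directory layout inferred from the /etc/test/nested/copy/156648/hosts target above):

    mkdir -p ~/.minikube/files/etc/test/nested/copy/156648
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/156648/hosts
    minikube start                                           # files/ is synced into the node at startup
    minikube ssh "cat /etc/test/nested/copy/156648/hosts"    # same content, now inside the VM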

TestFunctional/parallel/CertSync (1.81s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/156648.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh "sudo cat /etc/ssl/certs/156648.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/156648.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh "sudo cat /usr/share/ca-certificates/156648.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/1566482.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh "sudo cat /etc/ssl/certs/1566482.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/1566482.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh "sudo cat /usr/share/ca-certificates/1566482.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.81s)
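
The .pem/.0 pairs above are the same certificate reachable two ways: once under its copied name and once under its OpenSSL subject-hash name, which is how TLS libraries look certs up in /etc/ssl/certs. Assuming the PEM paths from the log, the hash link can be verified with:

    openssl x509 -hash -noout -in /etc/ssl/certs/156648.pem
    # prints the subject hash (51391683 here), so the cert is also readable as /etc/ssl/certs/51391683.0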

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-727506 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.7s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-727506 ssh "sudo systemctl is-active docker": exit status 1 (343.230795ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-727506 ssh "sudo systemctl is-active containerd": exit status 1 (360.333029ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.70s)
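
Both non-zero exits above are the asserted outcome, not failures: systemctl is-active exits 0 only for an active unit (3 is the conventional "not running" code behind the ssh status), and on this crio cluster docker and containerd should both be inactive. The same check as a standalone script:

    if minikube -p functional-727506 ssh "sudo systemctl is-active docker" >/dev/null 2>&1; then
      echo "FAIL: docker is active on a crio cluster"
    else
      echo "ok: docker inactive"   # the non-zero exit from is-active is what we want here
    fi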

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.53s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.53s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-727506 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-727506 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-727506 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 187813: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-727506 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-727506 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (19.36s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-727506 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [e0c73c55-4e0f-4ffa-972b-1e31d5dd01ff] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [e0c73c55-4e0f-4ffa-972b-1e31d5dd01ff] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 19.004524147s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (19.36s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 update-context --alsologtostderr -v=2
2024/01/08 21:18:13 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.19s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-727506 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-727506 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-k2zj6" [87f66fb6-42d3-4b87-9164-18cc5fb3010c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-k2zj6" [87f66fb6-42d3-4b87-9164-18cc5fb3010c] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004370196s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.19s)

TestFunctional/parallel/ServiceCmd/List (1.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 service list
functional_test.go:1458: (dbg) Done: out/minikube-linux-amd64 -p functional-727506 service list: (1.03761996s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.04s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.91s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 service list -o json
functional_test.go:1493: Took "912.38637ms" to run "out/minikube-linux-amd64 -p functional-727506 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.91s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:32014
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-727506 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.174.214 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
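
Taken together, the tunnel subtests exercise the whole minikube tunnel flow: while the tunnel process runs, the LoadBalancer service is assigned the in-cluster IP seen above (10.109.174.214) and the host can reach it directly. A hedged sketch of the same flow (service name from the log; the tunnel must keep running and may prompt for sudo):

    minikube -p functional-727506 tunnel &
    IP=$(kubectl get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s "http://$IP/" >/dev/null && echo "tunnel at http://$IP is working"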

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-727506 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ServiceCmd/Format (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.58s)

TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
E0108 21:17:59.970749  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt: no such file or directory
functional_test.go:1314: Took "293.911654ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "66.242588ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

TestFunctional/parallel/ServiceCmd/URL (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:32014
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.56s)
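
service --url resolves a NodePort service to a concrete endpoint instead of opening it in a browser (the default behaviour), which makes it usable from scripts:

    URL=$(minikube -p functional-727506 service hello-node --url)
    curl -s "$URL" | head -n 5   # echoserver reflects the request back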

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "316.232257ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "62.020941ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
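
The JSON form is the machine-readable counterpart of profile list, and --light skips the per-cluster status probes, which is why it runs in ~62ms against ~316ms above. A sketch of consuming it with jq; the .valid[].Name field path is an assumption based on current minikube output, not something this test asserts:

    minikube profile list -o json | jq -r '.valid[].Name'   # field names assumed, verify against your minikube version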

TestFunctional/parallel/MountCmd/any-port (6.98s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-727506 /tmp/TestFunctionalparallelMountCmdany-port2735233577/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1704748680584089533" to /tmp/TestFunctionalparallelMountCmdany-port2735233577/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1704748680584089533" to /tmp/TestFunctionalparallelMountCmdany-port2735233577/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1704748680584089533" to /tmp/TestFunctionalparallelMountCmdany-port2735233577/001/test-1704748680584089533
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-727506 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (303.688621ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan  8 21:18 created-by-test
-rw-r--r-- 1 docker docker 24 Jan  8 21:18 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan  8 21:18 test-1704748680584089533
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh cat /mount-9p/test-1704748680584089533
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-727506 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c2079afb-4e11-4fb1-beb4-348f08907344] Pending
helpers_test.go:344: "busybox-mount" [c2079afb-4e11-4fb1-beb4-348f08907344] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c2079afb-4e11-4fb1-beb4-348f08907344] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c2079afb-4e11-4fb1-beb4-348f08907344] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.00408699s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-727506 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-727506 /tmp/TestFunctionalparallelMountCmdany-port2735233577/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.98s)
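
The first findmnt probe failing with exit status 1 is just a race: the 9p mount is still being established when it runs, and the immediate retry succeeds. Driving the same mount by hand (host path illustrative; the mount command stays in the foreground for as long as the mount lives):

    minikube -p functional-727506 mount /tmp/demo:/mount-9p &
    minikube -p functional-727506 ssh "findmnt -T /mount-9p | grep 9p"   # confirm the 9p filesystem is up
    minikube -p functional-727506 ssh "sudo umount -f /mount-9p"         # tear down, then kill the mount process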

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-727506 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-727506
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-727506 image ls --format short --alsologtostderr:
I0108 21:18:22.856975  195017 out.go:296] Setting OutFile to fd 1 ...
I0108 21:18:22.857203  195017 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 21:18:22.857217  195017 out.go:309] Setting ErrFile to fd 2...
I0108 21:18:22.857226  195017 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 21:18:22.857733  195017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-150013/.minikube/bin
I0108 21:18:22.858965  195017 config.go:182] Loaded profile config "functional-727506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 21:18:22.859094  195017 config.go:182] Loaded profile config "functional-727506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 21:18:22.859716  195017 cli_runner.go:164] Run: docker container inspect functional-727506 --format={{.State.Status}}
I0108 21:18:22.879334  195017 ssh_runner.go:195] Run: systemctl --version
I0108 21:18:22.879397  195017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-727506
I0108 21:18:22.902844  195017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/functional-727506/id_rsa Username:docker}
I0108 21:18:23.001762  195017 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-727506 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| docker.io/library/nginx                 | alpine             | 529b5644c430c | 44.4MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | d453dd892d935 | 191MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| gcr.io/google-containers/addon-resizer  | functional-727506  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-727506 image ls --format table --alsologtostderr:
I0108 21:18:23.095846  195215 out.go:296] Setting OutFile to fd 1 ...
I0108 21:18:23.095990  195215 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 21:18:23.096004  195215 out.go:309] Setting ErrFile to fd 2...
I0108 21:18:23.096011  195215 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 21:18:23.096194  195215 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-150013/.minikube/bin
I0108 21:18:23.096847  195215 config.go:182] Loaded profile config "functional-727506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 21:18:23.096961  195215 config.go:182] Loaded profile config "functional-727506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 21:18:23.097371  195215 cli_runner.go:164] Run: docker container inspect functional-727506 --format={{.State.Status}}
I0108 21:18:23.115099  195215 ssh_runner.go:195] Run: systemctl --version
I0108 21:18:23.115147  195215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-727506
I0108 21:18:23.133353  195215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/functional-727506/id_rsa Username:docker}
I0108 21:18:23.229940  195215 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-727506 image ls --format json --alsologtostderr:
[{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9","repoDigests":["docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026","docker.io/library/nginx@sha256:9784f7985f6fba493ba30fb68419f50484fee8faaf677216cb95826f8491d2e9"],"repoTags":["docker.io/library/nginx:latest"],"size":"190867606"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-727506"],"size":"34114467"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff","repoDigests":["docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686","docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44405005"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-727506 image ls --format json --alsologtostderr:
I0108 21:18:22.848937  195018 out.go:296] Setting OutFile to fd 1 ...
I0108 21:18:22.849052  195018 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 21:18:22.849089  195018 out.go:309] Setting ErrFile to fd 2...
I0108 21:18:22.849102  195018 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 21:18:22.849331  195018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-150013/.minikube/bin
I0108 21:18:22.849995  195018 config.go:182] Loaded profile config "functional-727506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 21:18:22.850114  195018 config.go:182] Loaded profile config "functional-727506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 21:18:22.850582  195018 cli_runner.go:164] Run: docker container inspect functional-727506 --format={{.State.Status}}
I0108 21:18:22.869481  195018 ssh_runner.go:195] Run: systemctl --version
I0108 21:18:22.869550  195018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-727506
I0108 21:18:22.898767  195018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/functional-727506/id_rsa Username:docker}
I0108 21:18:23.001647  195018 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-727506 image ls --format yaml --alsologtostderr:
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-727506
size: "34114467"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff
repoDigests:
- docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686
- docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59
repoTags:
- docker.io/library/nginx:alpine
size: "44405005"
- id: d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9
repoDigests:
- docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026
- docker.io/library/nginx@sha256:9784f7985f6fba493ba30fb68419f50484fee8faaf677216cb95826f8491d2e9
repoTags:
- docker.io/library/nginx:latest
size: "190867606"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-727506 image ls --format yaml --alsologtostderr:
I0108 21:18:22.838319  195019 out.go:296] Setting OutFile to fd 1 ...
I0108 21:18:22.838512  195019 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 21:18:22.838541  195019 out.go:309] Setting ErrFile to fd 2...
I0108 21:18:22.838558  195019 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 21:18:22.838913  195019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-150013/.minikube/bin
I0108 21:18:22.839716  195019 config.go:182] Loaded profile config "functional-727506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 21:18:22.839901  195019 config.go:182] Loaded profile config "functional-727506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 21:18:22.840377  195019 cli_runner.go:164] Run: docker container inspect functional-727506 --format={{.State.Status}}
I0108 21:18:22.860124  195019 ssh_runner.go:195] Run: systemctl --version
I0108 21:18:22.860178  195019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-727506
I0108 21:18:22.883945  195019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/functional-727506/id_rsa Username:docker}
I0108 21:18:22.985490  195019 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
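
All four image ls formats (short, table, json, yaml) are views over the same inventory; each stderr trace above ends in the identical sudo crictl images --output json call on the node. Choosing a format is purely a consumer-side decision, e.g.:

    minikube -p functional-727506 image ls --format yaml   # same data as --format short/table/json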
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (1.84s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-727506 ssh pgrep buildkitd: exit status 1 (302.95543ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 image build -t localhost/my-image:functional-727506 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-727506 image build -t localhost/my-image:functional-727506 testdata/build --alsologtostderr: (1.312013173s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-727506 image build -t localhost/my-image:functional-727506 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 6f7c04ea8cc
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-727506
--> 3318dcca255
Successfully tagged localhost/my-image:functional-727506
3318dcca255ee3200d7ddb1a911619850e114925ccd6c76be85d078670709a8d
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-727506 image build -t localhost/my-image:functional-727506 testdata/build --alsologtostderr:
I0108 21:18:23.126950  195228 out.go:296] Setting OutFile to fd 1 ...
I0108 21:18:23.127261  195228 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 21:18:23.127274  195228 out.go:309] Setting ErrFile to fd 2...
I0108 21:18:23.127281  195228 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 21:18:23.127555  195228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-150013/.minikube/bin
I0108 21:18:23.128243  195228 config.go:182] Loaded profile config "functional-727506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 21:18:23.128738  195228 config.go:182] Loaded profile config "functional-727506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 21:18:23.129153  195228 cli_runner.go:164] Run: docker container inspect functional-727506 --format={{.State.Status}}
I0108 21:18:23.146322  195228 ssh_runner.go:195] Run: systemctl --version
I0108 21:18:23.146363  195228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-727506
I0108 21:18:23.162691  195228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/functional-727506/id_rsa Username:docker}
I0108 21:18:23.258203  195228 build_images.go:151] Building image from path: /tmp/build.3139465099.tar
I0108 21:18:23.258265  195228 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0108 21:18:23.268951  195228 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3139465099.tar
I0108 21:18:23.272521  195228 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3139465099.tar: stat -c "%s %y" /var/lib/minikube/build/build.3139465099.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3139465099.tar': No such file or directory
I0108 21:18:23.272553  195228 ssh_runner.go:362] scp /tmp/build.3139465099.tar --> /var/lib/minikube/build/build.3139465099.tar (3072 bytes)
I0108 21:18:23.294913  195228 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3139465099
I0108 21:18:23.302706  195228 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3139465099 -xf /var/lib/minikube/build/build.3139465099.tar
I0108 21:18:23.310672  195228 crio.go:297] Building image: /var/lib/minikube/build/build.3139465099
I0108 21:18:23.310758  195228 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-727506 /var/lib/minikube/build/build.3139465099 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0108 21:18:24.354299  195228 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-727506 /var/lib/minikube/build/build.3139465099 --cgroup-manager=cgroupfs: (1.043508382s)
I0108 21:18:24.354359  195228 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3139465099
I0108 21:18:24.362541  195228 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3139465099.tar
I0108 21:18:24.371162  195228 build_images.go:207] Built localhost/my-image:functional-727506 from /tmp/build.3139465099.tar
I0108 21:18:24.371191  195228 build_images.go:123] succeeded building to: functional-727506
I0108 21:18:24.371195  195228 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.84s)
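The STEP 1/3..3/3 lines above imply a build context roughly like the following. This is a sketch only: testdata/build itself is not included in this report, so the file contents are assumptions.

  # Recreate an equivalent context and rerun the same build command.
  mkdir -p /tmp/build && cd /tmp/build
  echo placeholder > content.txt   # actual contents of content.txt are unknown
  printf '%s\n' 'FROM gcr.io/k8s-minikube/busybox' 'RUN true' 'ADD content.txt /' > Dockerfile
  out/minikube-linux-amd64 -p functional-727506 image build -t localhost/my-image:functional-727506 .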
x
+
TestFunctional/parallel/ImageCommands/Setup (1.02s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-727506
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.02s)
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (9.64s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 image load --daemon gcr.io/google-containers/addon-resizer:functional-727506 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-727506 image load --daemon gcr.io/google-containers/addon-resizer:functional-727506 --alsologtostderr: (9.392929016s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (9.64s)
x
+
TestFunctional/parallel/MountCmd/specific-port (2.42s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-727506 /tmp/TestFunctionalparallelMountCmdspecific-port2008714301/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-727506 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (364.049496ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-727506 /tmp/TestFunctionalparallelMountCmdspecific-port2008714301/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-727506 ssh "sudo umount -f /mount-9p": exit status 1 (328.846428ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-727506 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-727506 /tmp/TestFunctionalparallelMountCmdspecific-port2008714301/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.42s)
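A sketch of the same flow by hand (host path illustrative; port and mount point match the test). The first findmnt can race the mount daemon, which is why the test retries it above.

  out/minikube-linux-amd64 mount -p functional-727506 /tmp/src:/mount-9p --port 46464 &
  out/minikube-linux-amd64 -p functional-727506 ssh "findmnt -T /mount-9p | grep 9p"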
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.03s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-727506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3868156937/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-727506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3868156937/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-727506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3868156937/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-727506 ssh "findmnt -T" /mount1: exit status 1 (438.256112ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-727506 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-727506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3868156937/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-727506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3868156937/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-727506 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3868156937/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.03s)
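The cleanup path validated here is the --kill flag, which tears down every mount daemon for a profile in one call; a sketch under the same assumptions as the previous example:

  # Start several concurrent mounts of one host directory...
  out/minikube-linux-amd64 mount -p functional-727506 /tmp/src:/mount1 &
  out/minikube-linux-amd64 mount -p functional-727506 /tmp/src:/mount2 &
  # ...then kill all mount processes belonging to the profile at once.
  out/minikube-linux-amd64 mount -p functional-727506 --kill=true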
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.46s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 image load --daemon gcr.io/google-containers/addon-resizer:functional-727506 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-727506 image load --daemon gcr.io/google-containers/addon-resizer:functional-727506 --alsologtostderr: (3.234442203s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.46s)
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.33s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-727506
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 image load --daemon gcr.io/google-containers/addon-resizer:functional-727506 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-727506 image load --daemon gcr.io/google-containers/addon-resizer:functional-727506 --alsologtostderr: (3.365448885s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.33s)
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.76s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 image save gcr.io/google-containers/addon-resizer:functional-727506 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.76s)
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 image rm gcr.io/google-containers/addon-resizer:functional-727506 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.16s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.16s)
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.8s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-727506
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-727506 image save --daemon gcr.io/google-containers/addon-resizer:functional-727506 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-727506
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.80s)
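Taken together, the ImageCommands subtests above exercise a save/load round-trip; condensed into one sequence (tar path illustrative, commands copied from the test invocations):

  out/minikube-linux-amd64 -p functional-727506 image save gcr.io/google-containers/addon-resizer:functional-727506 /tmp/resizer.tar
  out/minikube-linux-amd64 -p functional-727506 image rm gcr.io/google-containers/addon-resizer:functional-727506
  out/minikube-linux-amd64 -p functional-727506 image load /tmp/resizer.tar
  # Push the in-cluster image back into the host Docker daemon and confirm it.
  out/minikube-linux-amd64 -p functional-727506 image save --daemon gcr.io/google-containers/addon-resizer:functional-727506
  docker image inspect gcr.io/google-containers/addon-resizer:functional-727506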
x
+
TestFunctional/delete_addon-resizer_images (0.07s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-727506
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)
x
+
TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-727506
--- PASS: TestFunctional/delete_my-image_image (0.01s)
x
+
TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-727506
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)
x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (60.23s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-177638 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0108 21:19:21.891570  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-177638 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m0.230793805s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (60.23s)
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.72s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-177638 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-177638 addons enable ingress --alsologtostderr -v=5: (10.717913114s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.72s)
x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.55s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-177638 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.55s)
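The three legacy-ingress steps above reduce to this sequence (profile name and flags copied verbatim from the test commands):

  out/minikube-linux-amd64 start -p ingress-addon-legacy-177638 --kubernetes-version=v1.18.20 --memory=4096 --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 -p ingress-addon-legacy-177638 addons enable ingress
  out/minikube-linux-amd64 -p ingress-addon-legacy-177638 addons enable ingress-dns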
x
+
TestJSONOutput/start/Command (70.21s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-635337 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0108 21:22:38.240678  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/functional-727506/client.crt: no such file or directory
E0108 21:22:38.245955  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/functional-727506/client.crt: no such file or directory
E0108 21:22:38.256259  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/functional-727506/client.crt: no such file or directory
E0108 21:22:38.276602  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/functional-727506/client.crt: no such file or directory
E0108 21:22:38.316917  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/functional-727506/client.crt: no such file or directory
E0108 21:22:38.397313  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/functional-727506/client.crt: no such file or directory
E0108 21:22:38.557764  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/functional-727506/client.crt: no such file or directory
E0108 21:22:38.878342  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/functional-727506/client.crt: no such file or directory
E0108 21:22:39.519355  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/functional-727506/client.crt: no such file or directory
E0108 21:22:40.800069  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/functional-727506/client.crt: no such file or directory
E0108 21:22:43.361887  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/functional-727506/client.crt: no such file or directory
E0108 21:22:48.482588  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/functional-727506/client.crt: no such file or directory
E0108 21:22:58.722985  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/functional-727506/client.crt: no such file or directory
E0108 21:23:19.203621  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/functional-727506/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-635337 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m10.213042656s)
--- PASS: TestJSONOutput/start/Command (70.21s)
x
+
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
x
+
TestJSONOutput/pause/Command (0.66s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-635337 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)
x
+
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
x
+
TestJSONOutput/unpause/Command (0.6s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-635337 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)
x
+
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
x
+
TestJSONOutput/stop/Command (5.77s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-635337 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-635337 --output=json --user=testUser: (5.772789844s)
--- PASS: TestJSONOutput/stop/Command (5.77s)
x
+
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
x
+
TestErrorJSONOutput (0.24s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-948935 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-948935 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (84.31427ms)
-- stdout --
	{"specversion":"1.0","id":"374671a2-bad2-464b-b617-a04e76a11b4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-948935] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1c4c05bc-02bb-47e3-a742-8a4d952b9258","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17866"}}
	{"specversion":"1.0","id":"876bb553-ff05-4c81-88e4-bec4e2d2cf4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a145e797-1967-4c55-a757-4392fa5c733f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17866-150013/kubeconfig"}}
	{"specversion":"1.0","id":"f9f55849-8ea1-4d75-935a-8e1bc4ca3dbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-150013/.minikube"}}
	{"specversion":"1.0","id":"b19c71e6-ad4e-4ce4-9043-f4fae6876502","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e8be43be-3f8e-4dd3-b94f-1ef3f10294c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2abcc6d2-3d9c-437f-ac05-8d0ef53e1f20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-948935" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-948935
--- PASS: TestErrorJSONOutput (0.24s)
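Each stdout line above is a CloudEvents envelope; error events carry type io.k8s.sigs.minikube.error and an exit code. A sketch of extracting the message with jq (profile name illustrative):

  out/minikube-linux-amd64 start -p demo --output=json --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
  # -> The driver 'fail' is not supported on linux/amd64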
x
+
TestKicCustomNetwork/create_custom_network (28.95s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-231601 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-231601 --network=: (27.279477367s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-231601" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-231601
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-231601: (1.65680804s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.95s)
x
+
TestKicCustomNetwork/use_default_bridge_network (24.36s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-175868 --network=bridge
E0108 21:24:39.317637  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt: no such file or directory
E0108 21:24:39.322890  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt: no such file or directory
E0108 21:24:39.333179  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt: no such file or directory
E0108 21:24:39.353470  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt: no such file or directory
E0108 21:24:39.393817  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt: no such file or directory
E0108 21:24:39.474254  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt: no such file or directory
E0108 21:24:39.634619  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt: no such file or directory
E0108 21:24:39.955158  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt: no such file or directory
E0108 21:24:40.595908  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt: no such file or directory
E0108 21:24:41.876591  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt: no such file or directory
E0108 21:24:44.437570  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt: no such file or directory
E0108 21:24:49.557922  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-175868 --network=bridge: (22.440270944s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-175868" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-175868
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-175868: (1.902190347s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.36s)
x
+
TestKicExistingNetwork (27.1s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-868053 --network=existing-network
E0108 21:24:59.798122  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-868053 --network=existing-network: (25.094712076s)
helpers_test.go:175: Cleaning up "existing-network-868053" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-868053
E0108 21:25:20.278990  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-868053: (1.880524472s)
--- PASS: TestKicExistingNetwork (27.10s)
x
+
TestKicCustomSubnet (26.92s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-131508 --subnet=192.168.60.0/24
E0108 21:25:22.084562  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/functional-727506/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-131508 --subnet=192.168.60.0/24: (24.838831198s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-131508 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-131508" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-131508
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-131508: (2.06519232s)
--- PASS: TestKicCustomSubnet (26.92s)
x
+
TestKicStaticIP (27.12s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-928688 --static-ip=192.168.200.200
E0108 21:26:01.239476  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-928688 --static-ip=192.168.200.200: (24.983820724s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-928688 ip
helpers_test.go:175: Cleaning up "static-ip-928688" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-928688
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-928688: (1.99776214s)
--- PASS: TestKicStaticIP (27.12s)
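The KIC networking tests above each map onto one start flag; a combined sketch (profile names illustrative, values taken from the tests):

  out/minikube-linux-amd64 start -p net-demo --network=my-net              # named/existing Docker network
  out/minikube-linux-amd64 start -p subnet-demo --subnet=192.168.60.0/24   # custom subnet
  out/minikube-linux-amd64 start -p ip-demo --static-ip=192.168.200.200    # fixed node IP
  # Verify the subnet the same way the test does:
  docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"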
x
+
TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)
x
+
TestMinikubeProfile (49.05s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-373218 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-373218 --driver=docker  --container-runtime=crio: (22.733933879s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-375910 --driver=docker  --container-runtime=crio
E0108 21:26:38.048225  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-375910 --driver=docker  --container-runtime=crio: (21.132564587s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-373218
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-375910
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-375910" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-375910
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-375910: (1.869337394s)
helpers_test.go:175: Cleaning up "first-373218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-373218
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-373218: (2.266298296s)
--- PASS: TestMinikubeProfile (49.05s)
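A sketch of inspecting profiles the way the test does; note the .valid[].Name path is an assumption about the JSON layout, which this report does not show:

  out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'
  out/minikube-linux-amd64 profile first-373218   # switch the active profile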
x
+
TestMountStart/serial/StartWithMountFirst (8.33s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-971473 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-971473 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.328439987s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.33s)
x
+
TestMountStart/serial/VerifyMountFirst (0.26s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-971473 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)
x
+
TestMountStart/serial/StartWithMountSecond (5.44s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-991082 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-991082 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.441894861s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.44s)
x
+
TestMountStart/serial/VerifyMountSecond (0.26s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-991082 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)
x
+
TestMountStart/serial/DeleteFirst (1.63s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-971473 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-971473 --alsologtostderr -v=5: (1.633989517s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)
x
+
TestMountStart/serial/VerifyMountPostDelete (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-991082 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)
x
+
TestMountStart/serial/Stop (1.22s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-991082
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-991082: (1.216514555s)
--- PASS: TestMountStart/serial/Stop (1.22s)
x
+
TestMountStart/serial/RestartStopped (7.27s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-991082
E0108 21:27:23.160252  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-991082: (6.267522344s)
--- PASS: TestMountStart/serial/RestartStopped (7.27s)
x
+
TestMountStart/serial/VerifyMountPostStop (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-991082 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)
x
+
TestMultiNode/serial/FreshStart2Nodes (87.75s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-379549 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0108 21:27:38.240121  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/functional-727506/client.crt: no such file or directory
E0108 21:28:05.925613  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/functional-727506/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-379549 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m27.291629895s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (87.75s)
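A minimal reproduction of the two-node bring-up above (flags copied from the test command):

  out/minikube-linux-amd64 start -p multinode-379549 --wait=true --memory=2200 --nodes=2 --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 -p multinode-379549 status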
x
+
TestMultiNode/serial/DeployApp2Nodes (3.03s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-379549 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-379549 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-379549 -- rollout status deployment/busybox: (1.316018354s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-379549 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-379549 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-379549 -- exec busybox-5bc68d56bd-dmq2z -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-379549 -- exec busybox-5bc68d56bd-hncds -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-379549 -- exec busybox-5bc68d56bd-dmq2z -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-379549 -- exec busybox-5bc68d56bd-hncds -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-379549 -- exec busybox-5bc68d56bd-dmq2z -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-379549 -- exec busybox-5bc68d56bd-hncds -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.03s)
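The DNS checks above boil down to deploying the busybox manifest and resolving three names from each replica; a sketch (the pod name is whatever the jsonpath query returns):

  out/minikube-linux-amd64 kubectl -p multinode-379549 -- rollout status deployment/busybox
  POD=$(out/minikube-linux-amd64 kubectl -p multinode-379549 -- get pods -o jsonpath='{.items[0].metadata.name}')
  out/minikube-linux-amd64 kubectl -p multinode-379549 -- exec "$POD" -- nslookup kubernetes.default.svc.cluster.local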
x
+
TestMultiNode/serial/AddNode (19.67s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-379549 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-379549 -v 3 --alsologtostderr: (19.062374389s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (19.67s)
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-379549 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)
x
+
TestMultiNode/serial/ProfileList (0.28s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.28s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.48s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 cp testdata/cp-test.txt multinode-379549:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 ssh -n multinode-379549 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 cp multinode-379549:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2864192118/001/cp-test_multinode-379549.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 ssh -n multinode-379549 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 cp multinode-379549:/home/docker/cp-test.txt multinode-379549-m02:/home/docker/cp-test_multinode-379549_multinode-379549-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 ssh -n multinode-379549 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 ssh -n multinode-379549-m02 "sudo cat /home/docker/cp-test_multinode-379549_multinode-379549-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 cp multinode-379549:/home/docker/cp-test.txt multinode-379549-m03:/home/docker/cp-test_multinode-379549_multinode-379549-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 ssh -n multinode-379549 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 ssh -n multinode-379549-m03 "sudo cat /home/docker/cp-test_multinode-379549_multinode-379549-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 cp testdata/cp-test.txt multinode-379549-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 ssh -n multinode-379549-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 cp multinode-379549-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2864192118/001/cp-test_multinode-379549-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 ssh -n multinode-379549-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 cp multinode-379549-m02:/home/docker/cp-test.txt multinode-379549:/home/docker/cp-test_multinode-379549-m02_multinode-379549.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 ssh -n multinode-379549-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 ssh -n multinode-379549 "sudo cat /home/docker/cp-test_multinode-379549-m02_multinode-379549.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 cp multinode-379549-m02:/home/docker/cp-test.txt multinode-379549-m03:/home/docker/cp-test_multinode-379549-m02_multinode-379549-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 ssh -n multinode-379549-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 ssh -n multinode-379549-m03 "sudo cat /home/docker/cp-test_multinode-379549-m02_multinode-379549-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 cp testdata/cp-test.txt multinode-379549-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 ssh -n multinode-379549-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 cp multinode-379549-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2864192118/001/cp-test_multinode-379549-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 ssh -n multinode-379549-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 cp multinode-379549-m03:/home/docker/cp-test.txt multinode-379549:/home/docker/cp-test_multinode-379549-m03_multinode-379549.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 ssh -n multinode-379549-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 ssh -n multinode-379549 "sudo cat /home/docker/cp-test_multinode-379549-m03_multinode-379549.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 cp multinode-379549-m03:/home/docker/cp-test.txt multinode-379549-m02:/home/docker/cp-test_multinode-379549-m03_multinode-379549-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 ssh -n multinode-379549-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 ssh -n multinode-379549-m02 "sudo cat /home/docker/cp-test_multinode-379549-m03_multinode-379549-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.48s)
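
The copy matrix above reduces to three primitives; a sketch, assuming a minikube binary on PATH (the /tmp destination path is illustrative, not the one from this run):

	# host -> node
	minikube -p multinode-379549 cp testdata/cp-test.txt multinode-379549:/home/docker/cp-test.txt
	# node -> host
	minikube -p multinode-379549 cp multinode-379549:/home/docker/cp-test.txt /tmp/cp-test_multinode-379549.txt
	# node -> node, then verify on the target via ssh
	minikube -p multinode-379549 cp multinode-379549:/home/docker/cp-test.txt multinode-379549-m02:/home/docker/cp-test.txt
	minikube -p multinode-379549 ssh -n multinode-379549-m02 "sudo cat /home/docker/cp-test.txt"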

                                                
                                    
TestMultiNode/serial/StopNode (2.17s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-379549 node stop m03: (1.206613547s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-379549 status: exit status 7 (481.248131ms)

-- stdout --
	multinode-379549
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-379549-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-379549-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-379549 status --alsologtostderr: exit status 7 (478.382524ms)

-- stdout --
	multinode-379549
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-379549-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-379549-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0108 21:29:36.046416  253435 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:29:36.046675  253435 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:29:36.046685  253435 out.go:309] Setting ErrFile to fd 2...
	I0108 21:29:36.046690  253435 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:29:36.046922  253435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-150013/.minikube/bin
	I0108 21:29:36.047098  253435 out.go:303] Setting JSON to false
	I0108 21:29:36.047143  253435 mustload.go:65] Loading cluster: multinode-379549
	I0108 21:29:36.047258  253435 notify.go:220] Checking for updates...
	I0108 21:29:36.047702  253435 config.go:182] Loaded profile config "multinode-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:29:36.047724  253435 status.go:255] checking status of multinode-379549 ...
	I0108 21:29:36.048226  253435 cli_runner.go:164] Run: docker container inspect multinode-379549 --format={{.State.Status}}
	I0108 21:29:36.066400  253435 status.go:330] multinode-379549 host status = "Running" (err=<nil>)
	I0108 21:29:36.066434  253435 host.go:66] Checking if "multinode-379549" exists ...
	I0108 21:29:36.066719  253435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-379549
	I0108 21:29:36.083846  253435 host.go:66] Checking if "multinode-379549" exists ...
	I0108 21:29:36.084112  253435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:29:36.084168  253435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-379549
	I0108 21:29:36.100123  253435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/multinode-379549/id_rsa Username:docker}
	I0108 21:29:36.194584  253435 ssh_runner.go:195] Run: systemctl --version
	I0108 21:29:36.198579  253435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:29:36.208908  253435 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:29:36.264908  253435 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2024-01-08 21:29:36.256218676 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:29:36.265484  253435 kubeconfig.go:92] found "multinode-379549" server: "https://192.168.58.2:8443"
	I0108 21:29:36.265512  253435 api_server.go:166] Checking apiserver status ...
	I0108 21:29:36.265546  253435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:29:36.275856  253435 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1443/cgroup
	I0108 21:29:36.284079  253435 api_server.go:182] apiserver freezer: "5:freezer:/docker/6363bf6a0fa165f3dc81661834e1aa6385238760cfcba75c8c1a781a69e042ac/crio/crio-c0e5cf479b049cc3a91b75f42ad046d99966a8946acfcb7b4ae874dbde1610c6"
	I0108 21:29:36.284144  253435 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6363bf6a0fa165f3dc81661834e1aa6385238760cfcba75c8c1a781a69e042ac/crio/crio-c0e5cf479b049cc3a91b75f42ad046d99966a8946acfcb7b4ae874dbde1610c6/freezer.state
	I0108 21:29:36.291578  253435 api_server.go:204] freezer state: "THAWED"
	I0108 21:29:36.291604  253435 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0108 21:29:36.296585  253435 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0108 21:29:36.296609  253435 status.go:421] multinode-379549 apiserver status = Running (err=<nil>)
	I0108 21:29:36.296618  253435 status.go:257] multinode-379549 status: &{Name:multinode-379549 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 21:29:36.296645  253435 status.go:255] checking status of multinode-379549-m02 ...
	I0108 21:29:36.296904  253435 cli_runner.go:164] Run: docker container inspect multinode-379549-m02 --format={{.State.Status}}
	I0108 21:29:36.312948  253435 status.go:330] multinode-379549-m02 host status = "Running" (err=<nil>)
	I0108 21:29:36.312974  253435 host.go:66] Checking if "multinode-379549-m02" exists ...
	I0108 21:29:36.313292  253435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-379549-m02
	I0108 21:29:36.329502  253435 host.go:66] Checking if "multinode-379549-m02" exists ...
	I0108 21:29:36.329797  253435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:29:36.329843  253435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-379549-m02
	I0108 21:29:36.345522  253435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17866-150013/.minikube/machines/multinode-379549-m02/id_rsa Username:docker}
	I0108 21:29:36.438307  253435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:29:36.448172  253435 status.go:257] multinode-379549-m02 status: &{Name:multinode-379549-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0108 21:29:36.448201  253435 status.go:255] checking status of multinode-379549-m03 ...
	I0108 21:29:36.448485  253435 cli_runner.go:164] Run: docker container inspect multinode-379549-m03 --format={{.State.Status}}
	I0108 21:29:36.464263  253435 status.go:330] multinode-379549-m03 host status = "Stopped" (err=<nil>)
	I0108 21:29:36.464294  253435 status.go:343] host is not running, skipping remaining checks
	I0108 21:29:36.464300  253435 status.go:257] multinode-379549-m03 status: &{Name:multinode-379549-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.17s)
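
Note that status deliberately exits non-zero (exit status 7) once any host is Stopped, so the state is scriptable; a sketch, assuming a minikube binary on PATH:

	minikube -p multinode-379549 node stop m03
	if ! minikube -p multinode-379549 status; then
		echo "at least one node is down"	# exit status 7, as in the output above
	fi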

                                                
                                    
TestMultiNode/serial/StartAfterStop (11.53s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 node start m03 --alsologtostderr
E0108 21:29:39.317622  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-379549 node start m03 --alsologtostderr: (10.827800137s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.53s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (113.76s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-379549
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-379549
E0108 21:30:07.001757  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-379549: (24.799402155s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-379549 --wait=true -v=8 --alsologtostderr
E0108 21:31:38.047900  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-379549 --wait=true -v=8 --alsologtostderr: (1m28.834395529s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-379549
--- PASS: TestMultiNode/serial/RestartKeepsNodes (113.76s)
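
The restart exercised here is a full stop followed by a --wait=true start, after which the node list should be unchanged; a sketch:

	minikube node list -p multinode-379549	# record the three nodes
	minikube stop -p multinode-379549
	minikube start -p multinode-379549 --wait=true
	minikube node list -p multinode-379549	# expect the same nodes back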

                                                
                                    
TestMultiNode/serial/DeleteNode (4.71s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-379549 node delete m03: (4.104190715s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.71s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.84s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-379549 stop: (23.645979889s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-379549 status: exit status 7 (95.702299ms)

-- stdout --
	multinode-379549
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-379549-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-379549 status --alsologtostderr: exit status 7 (96.850878ms)

-- stdout --
	multinode-379549
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-379549-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0108 21:32:10.263001  263259 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:32:10.263160  263259 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:32:10.263169  263259 out.go:309] Setting ErrFile to fd 2...
	I0108 21:32:10.263174  263259 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:32:10.263380  263259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-150013/.minikube/bin
	I0108 21:32:10.263543  263259 out.go:303] Setting JSON to false
	I0108 21:32:10.263577  263259 mustload.go:65] Loading cluster: multinode-379549
	I0108 21:32:10.263690  263259 notify.go:220] Checking for updates...
	I0108 21:32:10.263965  263259 config.go:182] Loaded profile config "multinode-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:32:10.263979  263259 status.go:255] checking status of multinode-379549 ...
	I0108 21:32:10.264392  263259 cli_runner.go:164] Run: docker container inspect multinode-379549 --format={{.State.Status}}
	I0108 21:32:10.281521  263259 status.go:330] multinode-379549 host status = "Stopped" (err=<nil>)
	I0108 21:32:10.281541  263259 status.go:343] host is not running, skipping remaining checks
	I0108 21:32:10.281547  263259 status.go:257] multinode-379549 status: &{Name:multinode-379549 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 21:32:10.281574  263259 status.go:255] checking status of multinode-379549-m02 ...
	I0108 21:32:10.281844  263259 cli_runner.go:164] Run: docker container inspect multinode-379549-m02 --format={{.State.Status}}
	I0108 21:32:10.301347  263259 status.go:330] multinode-379549-m02 host status = "Stopped" (err=<nil>)
	I0108 21:32:10.301371  263259 status.go:343] host is not running, skipping remaining checks
	I0108 21:32:10.301379  263259 status.go:257] multinode-379549-m02 status: &{Name:multinode-379549-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.84s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (81.34s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-379549 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0108 21:32:38.240114  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/functional-727506/client.crt: no such file or directory
E0108 21:33:01.092724  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-379549 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m20.748542893s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-379549 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (81.34s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (23.41s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-379549
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-379549-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-379549-m02 --driver=docker  --container-runtime=crio: exit status 14 (80.325141ms)

-- stdout --
	* [multinode-379549-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-150013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-150013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	! Profile name 'multinode-379549-m02' is duplicated with machine name 'multinode-379549-m02' in profile 'multinode-379549'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-379549-m03 --driver=docker  --container-runtime=crio
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-379549-m03 --driver=docker  --container-runtime=crio: (21.112418111s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-379549
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-379549: exit status 80 (291.288123ms)

-- stdout --
	* Adding node m03 to cluster multinode-379549

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-379549-m03 already exists in multinode-379549-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-379549-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-379549-m03: (1.870204159s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.41s)
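
Two distinct failures are exercised above: exit status 14 (MK_USAGE) when a new profile name collides with a machine name inside an existing profile, and exit status 80 (GUEST_NODE_ADD) when node add would recreate a name already taken by another profile. A sketch of the colliding sequence:

	minikube start -p multinode-379549-m02 --driver=docker --container-runtime=crio	# refused: m02 is already a machine in multinode-379549
	minikube start -p multinode-379549-m03 --driver=docker --container-runtime=crio	# succeeds as a standalone profile
	minikube node add -p multinode-379549	# refused: the next node name, multinode-379549-m03, is taken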

                                                
                                    
TestPreload (137.22s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-282382 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0108 21:34:39.317585  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-282382 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m6.560805018s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-282382 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-282382
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-282382: (5.721858184s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-282382 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-282382 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m1.679566205s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-282382 image list
helpers_test.go:175: Cleaning up "test-preload-282382" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-282382
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-282382: (2.284281623s)
--- PASS: TestPreload (137.22s)
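
The scenario: create a cluster with preload disabled, pull an extra image, then restart with preload enabled and confirm the image survives; a condensed sketch of the commands above, assuming a minikube binary on PATH:

	minikube start -p test-preload-282382 --memory=2200 --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=crio
	minikube -p test-preload-282382 image pull gcr.io/k8s-minikube/busybox
	minikube stop -p test-preload-282382
	minikube start -p test-preload-282382 --memory=2200 --driver=docker --container-runtime=crio
	minikube -p test-preload-282382 image list	# should still include gcr.io/k8s-minikube/busybox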

                                                
                                    
TestScheduledStopUnix (100.46s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-279108 --memory=2048 --driver=docker  --container-runtime=crio
E0108 21:36:38.048735  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-279108 --memory=2048 --driver=docker  --container-runtime=crio: (23.606462361s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-279108 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-279108 -n scheduled-stop-279108
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-279108 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-279108 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-279108 -n scheduled-stop-279108
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-279108
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-279108 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0108 21:37:38.240337  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/functional-727506/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-279108
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-279108: exit status 7 (79.975753ms)

-- stdout --
	scheduled-stop-279108
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-279108 -n scheduled-stop-279108
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-279108 -n scheduled-stop-279108: exit status 7 (77.732438ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-279108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-279108
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-279108: (5.385167457s)
--- PASS: TestScheduledStopUnix (100.46s)
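
The scheduling primitives used above, in isolation (a sketch; minikube binary assumed on PATH):

	minikube stop -p scheduled-stop-279108 --schedule 5m	# arm a delayed stop
	minikube status --format={{.TimeToStop}} -p scheduled-stop-279108
	minikube stop -p scheduled-stop-279108 --cancel-scheduled	# disarm it
	minikube stop -p scheduled-stop-279108 --schedule 15s	# re-arm; the host reaches Stopped shortly after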

                                                
                                    
TestInsufficientStorage (10.36s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-976678 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-976678 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.954737349s)

-- stdout --
	{"specversion":"1.0","id":"bd74a30c-d66c-41cb-9921-6089d97667d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-976678] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a8039863-74b6-4e8e-a0f0-e612b04d67d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17866"}}
	{"specversion":"1.0","id":"3858fae9-2243-46b6-8102-1868e6db39e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"58dd3650-f765-4146-9146-e6e9416331fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17866-150013/kubeconfig"}}
	{"specversion":"1.0","id":"f4d29f03-e758-4581-b612-61877704a9e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-150013/.minikube"}}
	{"specversion":"1.0","id":"3a19363a-a17c-4855-b82a-23c01d86c71e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3a5cd6ac-d9ba-4223-a47e-0f6cea372a8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b96612e3-3b79-4454-b393-a46de9503e0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"55fe074d-0eef-4fc8-8ca9-6e8cc658407e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"263481e1-9122-4153-a0de-8cc767caebc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6f3439cb-171b-456d-8f34-472582af4514","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e6a2bd71-6e51-4318-82d9-ad6d9bc8b674","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-976678 in cluster insufficient-storage-976678","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d0178c8a-02c6-4dab-baf1-4d765673fc8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1703790982-17866 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f6c39c47-9831-4cc5-a666-11a7e9d79f30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"72fd4db3-fea5-4af0-96da-9b5ba8b90cb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-976678 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-976678 --output=json --layout=cluster: exit status 7 (277.134952ms)

-- stdout --
	{"Name":"insufficient-storage-976678","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-976678","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0108 21:38:06.620885  284263 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-976678" does not appear in /home/jenkins/minikube-integration/17866-150013/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-976678 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-976678 --output=json --layout=cluster: exit status 7 (278.411185ms)

-- stdout --
	{"Name":"insufficient-storage-976678","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-976678","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0108 21:38:06.900205  284351 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-976678" does not appear in /home/jenkins/minikube-integration/17866-150013/kubeconfig
	E0108 21:38:06.909880  284351 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/insufficient-storage-976678/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-976678" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-976678
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-976678: (1.850813979s)
--- PASS: TestInsufficientStorage (10.36s)
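
The storage gate appears to be driven by two test-only knobs visible in the JSON events above (MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE). Assuming they are ordinary environment variables read by the binary, a sketch of reproducing the exit-26 path:

	export MINIKUBE_TEST_STORAGE_CAPACITY=100	# assumption: simulates /var at 100% of capacity
	export MINIKUBE_TEST_AVAILABLE_STORAGE=19
	minikube start -p insufficient-storage-976678 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=crio
	echo $?	# 26 (RSRC_DOCKER_STORAGE); per the error advice, --force skips the check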

                                                
                                    
TestKubernetesUpgrade (351.61s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-922854 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-922854 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (50.29257209s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-922854
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-922854: (1.251188057s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-922854 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-922854 status --format={{.Host}}: exit status 7 (85.595745ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-922854 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-922854 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m33.688451823s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-922854 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-922854 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-922854 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (103.104319ms)

-- stdout --
	* [kubernetes-upgrade-922854] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-150013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-150013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-922854
	    minikube start -p kubernetes-upgrade-922854 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9228542 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-922854 --kubernetes-version=v1.29.0-rc.2

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-922854 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-922854 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.213009494s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-922854" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-922854
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-922854: (3.909084561s)
--- PASS: TestKubernetesUpgrade (351.61s)
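
Reduced to its commands, the upgrade path is: start old, stop, start new in place. An in-place downgrade is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED), and the suggested recovery is delete-and-recreate. A sketch:

	minikube start -p kubernetes-upgrade-922854 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
	minikube stop -p kubernetes-upgrade-922854
	minikube start -p kubernetes-upgrade-922854 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --driver=docker --container-runtime=crio
	# downgrading in place fails; recreate instead:
	minikube delete -p kubernetes-upgrade-922854
	minikube start -p kubernetes-upgrade-922854 --kubernetes-version=v1.16.0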

                                                
                                    
TestMissingContainerUpgrade (156.86s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.9.0.2880549979.exe start -p missing-upgrade-372281 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.9.0.2880549979.exe start -p missing-upgrade-372281 --memory=2200 --driver=docker  --container-runtime=crio: (1m28.67572319s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-372281
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-372281
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-372281 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0108 21:39:39.318379  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt: no such file or directory
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-372281 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m3.258828871s)
helpers_test.go:175: Cleaning up "missing-upgrade-372281" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-372281
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-372281: (3.861226729s)
--- PASS: TestMissingContainerUpgrade (156.86s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-292748 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-292748 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (85.996641ms)

-- stdout --
	* [NoKubernetes-292748] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-150013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-150013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.46s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.46s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (37.02s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-292748 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-292748 --driver=docker  --container-runtime=crio: (36.665956809s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-292748 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.02s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (10.13s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-292748 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-292748 --no-kubernetes --driver=docker  --container-runtime=crio: (7.66789243s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-292748 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-292748 status -o json: exit status 2 (354.85634ms)

-- stdout --
	{"Name":"NoKubernetes-292748","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-292748
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-292748: (2.110566445s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (10.13s)
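
Passing --no-kubernetes to start on an existing profile tears down the control plane while leaving the host running, which is exactly what the status JSON above reports; a sketch of the check (the ssh probe is the one the next test uses):

	minikube start -p NoKubernetes-292748 --no-kubernetes --driver=docker --container-runtime=crio
	minikube -p NoKubernetes-292748 status -o json	# Host "Running", Kubelet/APIServer "Stopped"; exits 2
	minikube ssh -p NoKubernetes-292748 "sudo systemctl is-active --quiet service kubelet"	# exits 1: kubelet inactive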

                                                
                                    
TestNoKubernetes/serial/Start (6.93s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-292748 --no-kubernetes --driver=docker  --container-runtime=crio
E0108 21:39:01.286536  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/functional-727506/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-292748 --no-kubernetes --driver=docker  --container-runtime=crio: (6.930910828s)
--- PASS: TestNoKubernetes/serial/Start (6.93s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-292748 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-292748 "sudo systemctl is-active --quiet service kubelet": exit status 1 (292.505307ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.44s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.44s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.25s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-292748
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-292748: (1.245398877s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (9.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-292748 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-292748 --driver=docker  --container-runtime=crio: (9.510217423s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.51s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-292748 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-292748 "sudo systemctl is-active --quiet service kubelet": exit status 1 (288.185192ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.56s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-304512
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.56s)

                                                
                                    
TestPause/serial/Start (69.16s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-852462 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-852462 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m9.156966443s)
--- PASS: TestPause/serial/Start (69.16s)

                                                
                                    
TestNetworkPlugins/group/false (3.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-104214 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-104214 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (166.277176ms)

                                                
                                                
-- stdout --
	* [false-104214] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-150013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-150013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 21:40:49.235938  327027 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:40:49.236264  327027 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:40:49.236277  327027 out.go:309] Setting ErrFile to fd 2...
	I0108 21:40:49.236285  327027 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:40:49.236598  327027 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-150013/.minikube/bin
	I0108 21:40:49.237221  327027 out.go:303] Setting JSON to false
	I0108 21:40:49.238987  327027 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":15801,"bootTime":1704734248,"procs":783,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:40:49.239055  327027 start.go:138] virtualization: kvm guest
	I0108 21:40:49.241482  327027 out.go:177] * [false-104214] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:40:49.243165  327027 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 21:40:49.243139  327027 notify.go:220] Checking for updates...
	I0108 21:40:49.244614  327027 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:40:49.246062  327027 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-150013/kubeconfig
	I0108 21:40:49.247557  327027 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-150013/.minikube
	I0108 21:40:49.249107  327027 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:40:49.250458  327027 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:40:49.252225  327027 config.go:182] Loaded profile config "force-systemd-env-101581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:40:49.252322  327027 config.go:182] Loaded profile config "kubernetes-upgrade-922854": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 21:40:49.252401  327027 config.go:182] Loaded profile config "pause-852462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:40:49.252473  327027 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:40:49.274427  327027 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 21:40:49.274566  327027 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:40:49.332683  327027 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:65 SystemTime:2024-01-08 21:40:49.320672394 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:40:49.332853  327027 docker.go:295] overlay module found
	I0108 21:40:49.334908  327027 out.go:177] * Using the docker driver based on user configuration
	I0108 21:40:49.336145  327027 start.go:298] selected driver: docker
	I0108 21:40:49.336165  327027 start.go:902] validating driver "docker" against <nil>
	I0108 21:40:49.336179  327027 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:40:49.338909  327027 out.go:177] 
	W0108 21:40:49.340291  327027 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0108 21:40:49.341636  327027 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-104214 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-104214

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-104214

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-104214

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-104214

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-104214

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-104214

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-104214

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-104214

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-104214

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-104214

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-104214

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-104214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-104214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-104214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-104214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-104214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-104214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-104214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-104214" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-104214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-104214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-104214" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 21:40:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-922854
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 21:40:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-852462
contexts:
- context:
    cluster: kubernetes-upgrade-922854
    user: kubernetes-upgrade-922854
  name: kubernetes-upgrade-922854
- context:
    cluster: pause-852462
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 21:40:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-852462
  name: pause-852462
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-922854
  user:
    client-certificate: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/kubernetes-upgrade-922854/client.crt
    client-key: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/kubernetes-upgrade-922854/client.key
- name: pause-852462
  user:
    client-certificate: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/pause-852462/client.crt
    client-key: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/pause-852462/client.key
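Only the kubernetes-upgrade-922854 and pause-852462 contexts exist in the config above; the false-104214 profile was never created (its start exited before provisioning anything), which is why every probe in this debug sweep reports a missing context. Against one of the live profiles the same probes would run normally, e.g.:

	kubectl --context pause-852462 get pods -n kube-system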

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-104214

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104214"

                                                
                                                
----------------------- debugLogs end: false-104214 [took: 3.430798745s] --------------------------------
helpers_test.go:175: Cleaning up "false-104214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-104214
--- PASS: TestNetworkPlugins/group/false (3.79s)
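This group passes because the rejection is the behavior under test: CRI-O ships no built-in CNI, so minikube refuses --cni=false for that runtime, and exit status 14 appears to be minikube's usage-error code (matching the MK_USAGE reason in the stderr above). For contrast, a start that names a CNI should be accepted; the --cni=bridge choice below is purely illustrative:

	out/minikube-linux-amd64 start -p false-104214 --cni=bridge --driver=docker --container-runtime=crio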

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (35.5s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-852462 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-852462 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.475051116s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (35.50s)

                                                
                                    
TestPause/serial/Pause (0.85s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-852462 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.85s)

                                                
                                    
TestPause/serial/VerifyStatus (0.34s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-852462 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-852462 --output=json --layout=cluster: exit status 2 (339.45675ms)

                                                
                                                
-- stdout --
	{"Name":"pause-852462","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-852462","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)
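The --layout=cluster JSON above reuses HTTP-style codes per component: 200 OK, 405 Stopped, 418 Paused. A minimal Go sketch that pulls the per-component codes out of that line (the struct shape is inferred from this one output, not a published schema):

	// cluster_status.go - decodes the `--output=json --layout=cluster`
	// line above; shape inferred from the log, so treat it as an assumption.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type component struct {
		Name       string
		StatusCode int
		StatusName string
	}

	type clusterStatus struct {
		Name  string
		Nodes []struct {
			Name       string
			Components map[string]component
		}
	}

	func main() {
		raw := []byte(`{"Name":"pause-852462","Nodes":[{"Name":"pause-852462","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`)
		var st clusterStatus
		if err := json.Unmarshal(raw, &st); err != nil {
			panic(err)
		}
		for _, n := range st.Nodes {
			for name, c := range n.Components {
				fmt.Printf("%s: %d (%s)\n", name, c.StatusCode, c.StatusName)
			}
		}
	}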

                                                
                                    
TestPause/serial/Unpause (0.64s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-852462 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.64s)

                                                
                                    
TestPause/serial/PauseAgain (0.81s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-852462 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.81s)

                                                
                                    
TestPause/serial/DeletePaused (2.68s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-852462 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-852462 --alsologtostderr -v=5: (2.680476349s)
--- PASS: TestPause/serial/DeletePaused (2.68s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (16.94s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (16.881507781s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-852462
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-852462: exit status 1 (18.25849ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-852462: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (16.94s)
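Deletion is verified negatively here: `docker volume inspect` must fail with "no such volume" before the profile counts as cleaned up. A small sketch of the same check:

	// volume_gone.go - asserts the profile's Docker volume is gone by
	// expecting `docker volume inspect` to fail, as in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "volume", "inspect", "pause-852462").CombinedOutput()
		if err != nil && strings.Contains(string(out), "no such volume") {
			fmt.Println("volume deleted, as expected")
			return
		}
		fmt.Printf("volume still present or unexpected error: %s\n", out)
	}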

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (119.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-520015 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-520015 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (1m59.510227564s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (119.51s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (45.67s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-844316 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0108 21:42:38.239729  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/functional-727506/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-844316 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (45.67300724s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.67s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-844316 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8c6ef3e5-2ec1-4b08-b5fb-b798e32cba52] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8c6ef3e5-2ec1-4b08-b5fb-b798e32cba52] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004238562s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-844316 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.27s)
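The deploy-and-wait pattern above (create from testdata, wait for the label selector to reach Running, then exec) can be reproduced by hand; a roughly equivalent manual check would be:

	kubectl --context embed-certs-844316 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m0s
	kubectl --context embed-certs-844316 exec busybox -- /bin/sh -c "ulimit -n"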

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-844316 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-844316 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-844316 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-844316 --alsologtostderr -v=3: (11.872533117s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.87s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-844316 -n embed-certs-844316
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-844316 -n embed-certs-844316: exit status 7 (82.239422ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-844316 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
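The bare "Stopped" on stdout above is a Go text/template at work: --format={{.Host}} renders just the Host field of the status value. A standalone illustration (the Status struct here is a stand-in, not minikube's actual type):

	// format_demo.go - renders a {{.Host}}-style template the way
	// `minikube status --format=...` does; Status is a stand-in type.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct{ Host string }

	func main() {
		t := template.Must(template.New("status").Parse("{{.Host}}"))
		if err := t.Execute(os.Stdout, Status{Host: "Stopped"}); err != nil {
			panic(err)
		}
	}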

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (337.84s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-844316 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-844316 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m37.514444604s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-844316 -n embed-certs-844316
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (337.84s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-520015 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e2f5d5d4-a983-4da1-bc31-71af112f7794] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e2f5d5d4-a983-4da1-bc31-71af112f7794] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003009006s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-520015 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-520015 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-520015 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.80s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-520015 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-520015 --alsologtostderr -v=3: (11.916314098s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.92s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-520015 -n old-k8s-version-520015
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-520015 -n old-k8s-version-520015: exit status 7 (85.672642ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-520015 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (438.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-520015 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-520015 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m18.534371143s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-520015 -n old-k8s-version-520015
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (438.89s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (55.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-458804 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-458804 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (55.943134075s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (55.94s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (67.87s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-248142 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-248142 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m7.865620531s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (67.87s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (7.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-458804 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [900d369f-1a7c-43fc-a2e7-8d948b180f06] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [900d369f-1a7c-43fc-a2e7-8d948b180f06] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.003419756s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-458804 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-458804 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-458804 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.82s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-458804 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-458804 --alsologtostderr -v=3: (11.911203468s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.91s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-458804 -n no-preload-458804
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-458804 -n no-preload-458804: exit status 7 (81.580244ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-458804 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (588.51s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-458804 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-458804 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (9m48.202239566s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-458804 -n no-preload-458804
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (588.51s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-248142 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2c6aebcb-22fa-4b4a-a5c5-80a48762e650] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2c6aebcb-22fa-4b4a-a5c5-80a48762e650] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.003438783s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-248142 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-248142 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-248142 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-248142 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-248142 --alsologtostderr -v=3: (11.915280823s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.92s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-248142 -n default-k8s-diff-port-248142
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-248142 -n default-k8s-diff-port-248142: exit status 7 (80.085875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-248142 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)
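
As the log notes, status exits 7 for a stopped profile, and addons can still be toggled while the cluster is down (they take effect on the next start). A minimal sketch of the same sequence in a POSIX shell:

    out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-248142 \
      -n default-k8s-diff-port-248142
    # Exit status 7 is the expected "Stopped" code after `minikube stop`.
    [ $? -eq 7 ] && echo "profile is stopped (ok)"
    # Enabling an addon while stopped just records it for the next start.
    out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-248142 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4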

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (341.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-248142 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0108 21:46:38.047821  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt: no such file or directory
E0108 21:47:38.240721  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/functional-727506/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-248142 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m41.170367425s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-248142 -n default-k8s-diff-port-248142
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (341.60s)
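
This profile exists to exercise a non-default API server port (8444 instead of 8443) across a stop/start cycle. A sketch of the relevant flag and a quick check, assuming the kubeconfig context minikube writes:

    out/minikube-linux-amd64 start -p default-k8s-diff-port-248142 --memory=2200 \
      --alsologtostderr --wait=true --apiserver-port=8444 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.28.4
    # The control-plane endpoint reported here should point at :8444.
    kubectl --context default-k8s-diff-port-248142 cluster-info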

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-68t7n" [f14273e9-a47e-48c5-88b5-7021e129e1cf] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-68t7n" [f14273e9-a47e-48c5-88b5-7021e129e1cf] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.004081454s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-68t7n" [f14273e9-a47e-48c5-88b5-7021e129e1cf] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003404934s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-844316 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-844316 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
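
The image check lists what the node's container runtime holds and reports anything outside the stock Kubernetes image set, as above. A rough manual equivalent (the grep filter is illustrative, not the test's actual logic):

    # List images known to the cluster's runtime.
    out/minikube-linux-amd64 -p embed-certs-844316 image list --format=json
    # Roughly reproduce the "non-minikube image" scan by filtering out registry.k8s.io.
    out/minikube-linux-amd64 -p embed-certs-844316 image list | grep -v 'registry.k8s.io'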

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.71s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-844316 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-844316 -n embed-certs-844316
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-844316 -n embed-certs-844316: exit status 2 (307.974292ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-844316 -n embed-certs-844316
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-844316 -n embed-certs-844316: exit status 2 (310.949835ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-844316 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-844316 -n embed-certs-844316
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-844316 -n embed-certs-844316
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.71s)
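
The exit codes above are the point of the test: a paused profile reports APIServer=Paused and Kubelet=Stopped, and status exits 2 until the profile is unpaused. A sketch of the cycle:

    out/minikube-linux-amd64 pause -p embed-certs-844316 --alsologtostderr -v=1
    # Both checks exit 2 while paused; the --format templates pull single status fields.
    out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-844316 -n embed-certs-844316   # Paused
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-844316 -n embed-certs-844316     # Stopped
    out/minikube-linux-amd64 unpause -p embed-certs-844316 --alsologtostderr -v=1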

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (37.08s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-875576 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0108 21:49:39.317981  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt: no such file or directory
E0108 21:49:41.093502  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/addons-954584/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-875576 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (37.081274063s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.08s)
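
The newest-cni profile combines several less common start knobs: a narrowed readiness wait, a feature gate, a bare CNI (no plugin deployed for it), and an extra kubeadm setting. A sketch of the invocation, taken directly from the flags in the log:

    # --wait limits readiness to the listed components; --network-plugin=cni leaves
    # pod networking to whatever CNI gets installed separately.
    out/minikube-linux-amd64 start -p newest-cni-875576 --memory=2200 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa \
      --feature-gates ServerSideApply=true \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.29.0-rc.2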

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.76s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-875576 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.76s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.22s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-875576 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-875576 --alsologtostderr -v=3: (1.219107875s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-875576 -n newest-cni-875576
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-875576 -n newest-cni-875576: exit status 7 (80.832172ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-875576 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (25.88s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-875576 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-875576 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (25.566587428s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-875576 -n newest-cni-875576
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (25.88s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-875576 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.61s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-875576 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-875576 -n newest-cni-875576
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-875576 -n newest-cni-875576: exit status 2 (305.862929ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-875576 -n newest-cni-875576
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-875576 -n newest-cni-875576: exit status 2 (314.390785ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-875576 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-875576 -n newest-cni-875576
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-875576 -n newest-cni-875576
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.61s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (69.25s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-104214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-104214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m9.254685073s)
--- PASS: TestNetworkPlugins/group/auto/Start (69.25s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-104214 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-104214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-c4nbk" [d1076489-c033-4f53-ae76-aa843c896777] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-c4nbk" [d1076489-c033-4f53-ae76-aa843c896777] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003573741s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-nmr5w" [3ee9628f-29d9-4493-94a2-7c4d6c4c7f7c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003516292s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-nmr5w" [3ee9628f-29d9-4493-94a2-7c4d6c4c7f7c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004457531s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-520015 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-104214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-104214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-104214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
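
DNS, Localhost and HairPin are three probes against the same netcat deployment: a cluster DNS lookup, a dial to the pod's own loopback, and a dial to the pod's own service name (hairpin traffic, which must route back to the originating pod). The checks as run against this profile:

    # Cluster DNS resolves the default kubernetes service.
    kubectl --context auto-104214 exec deployment/netcat -- nslookup kubernetes.default
    # The pod can reach a listener on its own loopback.
    kubectl --context auto-104214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # Hairpin: the pod dials its own service name and is routed back to itself.
    kubectl --context auto-104214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"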

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-520015 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-520015 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-520015 -n old-k8s-version-520015
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-520015 -n old-k8s-version-520015: exit status 2 (355.576748ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-520015 -n old-k8s-version-520015
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-520015 -n old-k8s-version-520015: exit status 2 (383.69491ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-520015 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-520015 -n old-k8s-version-520015
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-520015 -n old-k8s-version-520015
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.58s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (45.65s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-104214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-104214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (45.651632174s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (45.65s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (59.36s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-104214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-104214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (59.35861176s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.36s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zh8f4" [cb1128ff-62cc-4445-822b-15aa442265f2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zh8f4" [cb1128ff-62cc-4445-822b-15aa442265f2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.004161617s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zh8f4" [cb1128ff-62cc-4445-822b-15aa442265f2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003547028s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-248142 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-4sjdv" [9965bad3-e07c-4b82-99c5-bb992d04f788] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004875381s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
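
For CNIs that ship a controller pod, the suite waits for the plugin itself before probing connectivity. A rough manual equivalent using kubectl wait (the label is the one polled above; the timeout is illustrative):

    # Block until the kindnet daemonset pod reports Ready.
    kubectl --context kindnet-104214 -n kube-system wait --for=condition=ready \
      pod -l app=kindnet --timeout=10m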

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-248142 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-248142 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-248142 -n default-k8s-diff-port-248142
E0108 21:52:38.239854  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/functional-727506/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-248142 -n default-k8s-diff-port-248142: exit status 2 (306.215838ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-248142 -n default-k8s-diff-port-248142
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-248142 -n default-k8s-diff-port-248142: exit status 2 (304.209373ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-248142 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-248142 -n default-k8s-diff-port-248142
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-248142 -n default-k8s-diff-port-248142
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.80s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-104214 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-104214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-b45b7" [473ed3f1-89d6-4832-915a-6effd275b022] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-b45b7" [473ed3f1-89d6-4832-915a-6effd275b022] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004518736s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (37.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-104214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-104214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (37.25690676s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (37.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-104214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-104214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-104214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-7strl" [f8873711-4e01-4d52-84ca-3e9034426c2a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004407605s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-104214 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-104214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-56zhw" [a8597f52-65f4-48bd-a259-c1b68b2d2928] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-56zhw" [a8597f52-65f4-48bd-a259-c1b68b2d2928] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.00461115s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (78.9s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-104214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-104214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m18.899239883s)
--- PASS: TestNetworkPlugins/group/bridge/Start (78.90s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-104214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-104214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-104214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-104214 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-104214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7s7nl" [700b49b2-8f75-47be-b7bc-ca76e013ad1d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7s7nl" [700b49b2-8f75-47be-b7bc-ca76e013ad1d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004644043s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-104214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-104214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-104214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (65.1s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-104214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-104214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m5.101077192s)
--- PASS: TestNetworkPlugins/group/calico/Start (65.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (55.62s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-104214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0108 21:53:50.940609  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/old-k8s-version-520015/client.crt: no such file or directory
E0108 21:53:52.221273  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/old-k8s-version-520015/client.crt: no such file or directory
E0108 21:53:54.781607  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/old-k8s-version-520015/client.crt: no such file or directory
E0108 21:53:59.902447  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/old-k8s-version-520015/client.crt: no such file or directory
E0108 21:54:10.143021  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/old-k8s-version-520015/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-104214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (55.622747333s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.62s)
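
Across these Start tests the only real variable is how pod networking is selected: --cni with a built-in name, --cni with a manifest path, or --enable-default-cni for the legacy kubelet bridge. A condensed sketch of the variants in this run (memory/wait flags omitted for brevity):

    # Built-in CNIs by name:
    out/minikube-linux-amd64 start -p kindnet-104214 --cni=kindnet --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 start -p calico-104214 --cni=calico --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 start -p bridge-104214 --cni=bridge --driver=docker --container-runtime=crio
    # A custom manifest applied as the CNI:
    out/minikube-linux-amd64 start -p custom-flannel-104214 --cni=testdata/kube-flannel.yaml \
      --driver=docker --container-runtime=crio
    # The legacy built-in bridge:
    out/minikube-linux-amd64 start -p enable-default-cni-104214 --enable-default-cni=true \
      --driver=docker --container-runtime=crio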

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-104214 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-104214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nqzjx" [857c3c7e-1740-46cb-b26c-8e5933a69719] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 21:54:30.623676  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/old-k8s-version-520015/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-nqzjx" [857c3c7e-1740-46cb-b26c-8e5933a69719] Running
E0108 21:54:39.318243  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/ingress-addon-legacy-177638/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.006607846s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-104214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-104214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-104214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-td95x" [9fc6e3b9-f6b9-48b4-930a-c1e54cac7d57] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005406079s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-104214 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-104214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rf6nr" [b7205fb3-463f-4120-a967-9d2ce06d3025] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rf6nr" [b7205fb3-463f-4120-a967-9d2ce06d3025] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003908295s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-104214 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-104214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jzj5x" [bd378327-276f-4788-8e53-db12c170cc04] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jzj5x" [bd378327-276f-4788-8e53-db12c170cc04] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004609524s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-104214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-104214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-104214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-104214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-104214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-104214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)
E0108 21:55:41.287625  156648 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/functional-727506/client.crt: no such file or directory

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kddnx" [293cf4e4-fd5f-453f-8abb-abc3ecc6b5af] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003847568s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kddnx" [293cf4e4-fd5f-453f-8abb-abc3ecc6b5af] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003349196s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-458804 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-458804 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/Pause (2.63s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-458804 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-458804 -n no-preload-458804
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-458804 -n no-preload-458804: exit status 2 (294.862602ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-458804 -n no-preload-458804
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-458804 -n no-preload-458804: exit status 2 (295.80518ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-458804 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-458804 -n no-preload-458804
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-458804 -n no-preload-458804
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.63s)
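
The Pause sequence exercises the expected state machine: after pause, status reports the API server as Paused and the kubelet as Stopped, so the status command exits with code 2, which the test tolerates ("may be ok"); unpause returns both to Running and status exits cleanly. The same round trip can be driven by hand (hypothetical session, reusing this run's profile):

	out/minikube-linux-amd64 pause -p no-preload-458804
	out/minikube-linux-amd64 status -p no-preload-458804   # non-zero exit while paused
	out/minikube-linux-amd64 unpause -p no-preload-458804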

Test skip (27/316)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-624091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-624091
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/kubenet (3.56s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-104214 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-104214

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-104214

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-104214

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-104214

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-104214

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-104214

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-104214

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-104214

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-104214

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-104214

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: /etc/hosts:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: /etc/resolv.conf:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-104214

>>> host: crictl pods:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: crictl containers:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> k8s: describe netcat deployment:
error: context "kubenet-104214" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-104214" does not exist

>>> k8s: netcat logs:
error: context "kubenet-104214" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-104214" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-104214" does not exist

>>> k8s: coredns logs:
error: context "kubenet-104214" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-104214" does not exist

>>> k8s: api server logs:
error: context "kubenet-104214" does not exist

>>> host: /etc/cni:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: ip a s:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: ip r s:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: iptables-save:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: iptables table nat:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-104214" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-104214" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-104214" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: kubelet daemon config:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> k8s: kubelet logs:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 21:40:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-922854
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 21:40:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-852462
contexts:
- context:
    cluster: kubernetes-upgrade-922854
    user: kubernetes-upgrade-922854
  name: kubernetes-upgrade-922854
- context:
    cluster: pause-852462
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 21:40:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-852462
  name: pause-852462
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-922854
  user:
    client-certificate: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/kubernetes-upgrade-922854/client.crt
    client-key: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/kubernetes-upgrade-922854/client.key
- name: pause-852462
  user:
    client-certificate: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/pause-852462/client.crt
    client-key: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/pause-852462/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-104214

>>> host: docker daemon status:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: docker daemon config:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: docker system info:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: cri-docker daemon status:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: cri-docker daemon config:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: cri-dockerd version:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: containerd daemon status:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: containerd daemon config:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: containerd config dump:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: crio daemon status:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: crio daemon config:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: /etc/crio:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

>>> host: crio config:
* Profile "kubenet-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104214"

----------------------- debugLogs end: kubenet-104214 [took: 3.396768742s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-104214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-104214
--- SKIP: TestNetworkPlugins/group/kubenet (3.56s)
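
Every kubenet debug probe above fails with "context was not found" or "Profile ... not found" because the test is skipped before any cluster is created: kubenet is kubelet's legacy network plugin rather than a CNI plugin, and this crio job requires CNI, so no kubenet-104214 profile or kubeconfig context ever exists. The kubectl config dump accordingly lists only the kubernetes-upgrade-922854 and pause-852462 clusters, with current-context unset. A quick way to confirm which contexts and profiles exist at that point (hypothetical commands, not part of the run):

	kubectl config get-contexts
	out/minikube-linux-amd64 profile list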

TestNetworkPlugins/group/cilium (3.84s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-104214 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-104214

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-104214

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-104214

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-104214

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-104214

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-104214

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-104214

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-104214

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-104214

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-104214

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-104214

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-104214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-104214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-104214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-104214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-104214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-104214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-104214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-104214" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-104214

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-104214

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-104214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-104214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-104214

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-104214

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-104214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-104214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-104214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-104214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-104214" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt
extensions:
- extension:
last-update: Mon, 08 Jan 2024 21:40:22 UTC
provider: minikube.sigs.k8s.io
version: v1.32.0
name: cluster_info
server: https://192.168.76.2:8443
name: kubernetes-upgrade-922854
- cluster:
certificate-authority: /home/jenkins/minikube-integration/17866-150013/.minikube/ca.crt
extensions:
- extension:
last-update: Mon, 08 Jan 2024 21:40:18 UTC
provider: minikube.sigs.k8s.io
version: v1.32.0
name: cluster_info
server: https://192.168.67.2:8443
name: pause-852462
contexts:
- context:
cluster: kubernetes-upgrade-922854
user: kubernetes-upgrade-922854
name: kubernetes-upgrade-922854
- context:
cluster: pause-852462
extensions:
- extension:
last-update: Mon, 08 Jan 2024 21:40:18 UTC
provider: minikube.sigs.k8s.io
version: v1.32.0
name: context_info
namespace: default
user: pause-852462
name: pause-852462
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-922854
user:
client-certificate: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/kubernetes-upgrade-922854/client.crt
client-key: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/kubernetes-upgrade-922854/client.key
- name: pause-852462
user:
client-certificate: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/pause-852462/client.crt
client-key: /home/jenkins/minikube-integration/17866-150013/.minikube/profiles/pause-852462/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-104214

>>> host: docker daemon status:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

>>> host: docker daemon config:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

>>> host: docker system info:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

>>> host: cri-docker daemon status:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

>>> host: cri-docker daemon config:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

>>> host: cri-dockerd version:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

>>> host: containerd daemon status:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

>>> host: containerd daemon config:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

>>> host: containerd config dump:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

>>> host: crio daemon status:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

>>> host: crio daemon config:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

>>> host: /etc/crio:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

>>> host: crio config:
* Profile "cilium-104214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104214"

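[note] All of the runtime probes above (docker, cri-docker, containerd, crio) fail identically for the same reason: each one shells into the node of the target profile, and no such node exists. Against a started crio-based profile they would return real output; a rough sketch, where "some-profile" is a hypothetical running profile:

    # check the crio daemon and its configuration on the node
    out/minikube-linux-amd64 ssh -p some-profile -- sudo systemctl status crio
    out/minikube-linux-amd64 ssh -p some-profile -- sudo cat /etc/crio/crio.conf
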
----------------------- debugLogs end: cilium-104214 [took: 3.683631015s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-104214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-104214
--- SKIP: TestNetworkPlugins/group/cilium (3.84s)