Test Report: Docker_Linux_crio 17206

f478b3e95ad7f4002b1f24747b20ea33f6e08bc3:2023-11-28:32057

Test failures (6/314)

| Order | Failed test                                            | Duration (s) |
|-------|--------------------------------------------------------|--------------|
| 35    | TestAddons/parallel/Ingress                            | 152.67       |
| 126   | TestFunctional/parallel/ImageCommands/ImageLoadDaemon  | 10.1         |
| 166   | TestIngressAddonLegacy/serial/ValidateIngressAddons    | 182.25       |
| 216   | TestMultiNode/serial/PingHostFrom2Pods                 | 3.13         |
| 237   | TestRunningBinaryUpgrade                               | 73.22        |
| 245   | TestStoppedBinaryUpgrade/Upgrade                       | 112.05       |
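Each failure above can usually be reproduced from a minikube checkout. A minimal sketch, mirroring this job's configuration (docker driver, cri-o runtime); the go test flag names below are assumptions, not taken from this report, so verify them against test/integration in your tree:

	# Hypothetical local re-run of the first failed test from the table above.
	# --minikube-start-args is the custom flag the integration suite is
	# documented to accept; check your checkout before relying on it.
	go test ./test/integration -v -timeout 60m \
	  -run "TestAddons/parallel/Ingress" \
	  -args --minikube-start-args="--driver=docker --container-runtime=crio"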
TestAddons/parallel/Ingress (152.67s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-931360 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-931360 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-931360 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6b89227a-1513-4b60-84bd-8536a4586445] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6b89227a-1513-4b60-84bd-8536a4586445] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.029535076s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-931360 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-931360 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.111668354s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
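For context on the stderr above: minikube ssh propagates the remote command's exit status, and 28 is curl's code for "operation timed out", so nothing behind the ingress answered on port 80 inside the node before the test gave up. A minimal manual re-check against the same profile (the --max-time bound is an illustrative addition, not part of the test):

	# Repeat the probe the test performs; an nginx HTML response means the
	# ingress path works, exit status 28 means requests are still timing out.
	out/minikube-linux-amd64 -p addons-931360 ssh \
	  "curl -s --max-time 60 http://127.0.0.1/ -H 'Host: nginx.example.com'"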
addons_test.go:285: (dbg) Run:  kubectl --context addons-931360 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-931360 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-931360 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-931360 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-931360 addons disable ingress --alsologtostderr -v=1: (7.612879188s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-931360
helpers_test.go:235: (dbg) docker inspect addons-931360:

-- stdout --
	[
	    {
	        "Id": "768276b8ed6b009fbef0cba4436e6d391c08de1488d7a17be7c751cf789af39b",
	        "Created": "2023-11-27T23:25:31.600395006Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 13044,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-27T23:25:31.946547887Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7b13b8068c138827ed6edd3fefc1858e39f15798035b600ada929f3fdbe10859",
	        "ResolvConfPath": "/var/lib/docker/containers/768276b8ed6b009fbef0cba4436e6d391c08de1488d7a17be7c751cf789af39b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/768276b8ed6b009fbef0cba4436e6d391c08de1488d7a17be7c751cf789af39b/hostname",
	        "HostsPath": "/var/lib/docker/containers/768276b8ed6b009fbef0cba4436e6d391c08de1488d7a17be7c751cf789af39b/hosts",
	        "LogPath": "/var/lib/docker/containers/768276b8ed6b009fbef0cba4436e6d391c08de1488d7a17be7c751cf789af39b/768276b8ed6b009fbef0cba4436e6d391c08de1488d7a17be7c751cf789af39b-json.log",
	        "Name": "/addons-931360",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-931360:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-931360",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bd33b4f2d1a6cc2a62c0eb8f6e6947585fe3c287af1d2f844965025099038fe1-init/diff:/var/lib/docker/overlay2/7130e71395072cd8dcd718fa28933a7b57b5714a10c6614947d04756418543ae/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bd33b4f2d1a6cc2a62c0eb8f6e6947585fe3c287af1d2f844965025099038fe1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bd33b4f2d1a6cc2a62c0eb8f6e6947585fe3c287af1d2f844965025099038fe1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bd33b4f2d1a6cc2a62c0eb8f6e6947585fe3c287af1d2f844965025099038fe1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-931360",
	                "Source": "/var/lib/docker/volumes/addons-931360/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-931360",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-931360",
	                "name.minikube.sigs.k8s.io": "addons-931360",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8174e31c6a9dc5b17df63ecc39b56113bdcb845a2a9fba3dd9982b80363e4e26",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8174e31c6a9d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-931360": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "768276b8ed6b",
	                        "addons-931360"
	                    ],
	                    "NetworkID": "4cca6e91ced6a93249280920dd55e83b2b7e7dbd6628d5f7660aa6f591474b66",
	                    "EndpointID": "defab8fbfa2e749d8d750ad9af5628a9ea6fb6bb40d3ebd23dc819101471b2c5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
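One detail worth pulling out of the inspect dump above: the node's SSH endpoint (22/tcp) is published on 127.0.0.1:32772, which is the port the provisioning log below dials. The same Go template minikube itself runs can extract just that value:

	# Query only the published SSH port instead of dumping the full JSON;
	# this is the template the minikube logs below use during provisioning.
	docker container inspect addons-931360 \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# => 32772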
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-931360 -n addons-931360
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-931360 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-931360 logs -n 25: (1.157837644s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-824886                                                                     | download-only-824886   | jenkins | v1.32.0 | 27 Nov 23 23:25 UTC | 27 Nov 23 23:25 UTC |
	| delete  | -p download-only-824886                                                                     | download-only-824886   | jenkins | v1.32.0 | 27 Nov 23 23:25 UTC | 27 Nov 23 23:25 UTC |
	| start   | --download-only -p                                                                          | download-docker-379589 | jenkins | v1.32.0 | 27 Nov 23 23:25 UTC |                     |
	|         | download-docker-379589                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-379589                                                                   | download-docker-379589 | jenkins | v1.32.0 | 27 Nov 23 23:25 UTC | 27 Nov 23 23:25 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-447370   | jenkins | v1.32.0 | 27 Nov 23 23:25 UTC |                     |
	|         | binary-mirror-447370                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:35791                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-447370                                                                     | binary-mirror-447370   | jenkins | v1.32.0 | 27 Nov 23 23:25 UTC | 27 Nov 23 23:25 UTC |
	| addons  | enable dashboard -p                                                                         | addons-931360          | jenkins | v1.32.0 | 27 Nov 23 23:25 UTC |                     |
	|         | addons-931360                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-931360          | jenkins | v1.32.0 | 27 Nov 23 23:25 UTC |                     |
	|         | addons-931360                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-931360 --wait=true                                                                | addons-931360          | jenkins | v1.32.0 | 27 Nov 23 23:25 UTC | 27 Nov 23 23:27 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-931360          | jenkins | v1.32.0 | 27 Nov 23 23:27 UTC | 27 Nov 23 23:27 UTC |
	|         | -p addons-931360                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-931360 ssh cat                                                                       | addons-931360          | jenkins | v1.32.0 | 27 Nov 23 23:27 UTC | 27 Nov 23 23:27 UTC |
	|         | /opt/local-path-provisioner/pvc-063e1186-7680-47f4-926d-164851142721_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-931360 addons disable                                                                | addons-931360          | jenkins | v1.32.0 | 27 Nov 23 23:27 UTC | 27 Nov 23 23:27 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-931360 addons                                                                        | addons-931360          | jenkins | v1.32.0 | 27 Nov 23 23:27 UTC | 27 Nov 23 23:27 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-931360 ip                                                                            | addons-931360          | jenkins | v1.32.0 | 27 Nov 23 23:27 UTC | 27 Nov 23 23:27 UTC |
	| addons  | addons-931360 addons disable                                                                | addons-931360          | jenkins | v1.32.0 | 27 Nov 23 23:27 UTC | 27 Nov 23 23:27 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-931360          | jenkins | v1.32.0 | 27 Nov 23 23:27 UTC | 27 Nov 23 23:27 UTC |
	|         | addons-931360                                                                               |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-931360          | jenkins | v1.32.0 | 27 Nov 23 23:27 UTC | 27 Nov 23 23:27 UTC |
	|         | addons-931360                                                                               |                        |         |         |                     |                     |
	| addons  | addons-931360 addons disable                                                                | addons-931360          | jenkins | v1.32.0 | 27 Nov 23 23:27 UTC | 27 Nov 23 23:27 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-931360          | jenkins | v1.32.0 | 27 Nov 23 23:27 UTC | 27 Nov 23 23:27 UTC |
	|         | -p addons-931360                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-931360 ssh curl -s                                                                   | addons-931360          | jenkins | v1.32.0 | 27 Nov 23 23:27 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-931360 addons                                                                        | addons-931360          | jenkins | v1.32.0 | 27 Nov 23 23:28 UTC | 27 Nov 23 23:28 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-931360 addons                                                                        | addons-931360          | jenkins | v1.32.0 | 27 Nov 23 23:28 UTC | 27 Nov 23 23:28 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-931360 ip                                                                            | addons-931360          | jenkins | v1.32.0 | 27 Nov 23 23:30 UTC | 27 Nov 23 23:30 UTC |
	| addons  | addons-931360 addons disable                                                                | addons-931360          | jenkins | v1.32.0 | 27 Nov 23 23:30 UTC | 27 Nov 23 23:30 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-931360 addons disable                                                                | addons-931360          | jenkins | v1.32.0 | 27 Nov 23 23:30 UTC | 27 Nov 23 23:30 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 23:25:09
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 23:25:09.013747   12370 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:25:09.013979   12370 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:25:09.013987   12370 out.go:309] Setting ErrFile to fd 2...
	I1127 23:25:09.013992   12370 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:25:09.014207   12370 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4554/.minikube/bin
	I1127 23:25:09.014814   12370 out.go:303] Setting JSON to false
	I1127 23:25:09.015582   12370 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":461,"bootTime":1701127048,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 23:25:09.015640   12370 start.go:138] virtualization: kvm guest
	I1127 23:25:09.017764   12370 out.go:177] * [addons-931360] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 23:25:09.019135   12370 out.go:177]   - MINIKUBE_LOCATION=17206
	I1127 23:25:09.020535   12370 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:25:09.019219   12370 notify.go:220] Checking for updates...
	I1127 23:25:09.023143   12370 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-4554/kubeconfig
	I1127 23:25:09.024332   12370 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4554/.minikube
	I1127 23:25:09.025684   12370 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 23:25:09.027085   12370 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 23:25:09.028731   12370 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:25:09.049329   12370 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 23:25:09.049435   12370 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:25:09.101625   12370 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:41 SystemTime:2023-11-27 23:25:09.093806303 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 23:25:09.101741   12370 docker.go:295] overlay module found
	I1127 23:25:09.103508   12370 out.go:177] * Using the docker driver based on user configuration
	I1127 23:25:09.104920   12370 start.go:298] selected driver: docker
	I1127 23:25:09.104931   12370 start.go:902] validating driver "docker" against <nil>
	I1127 23:25:09.104941   12370 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 23:25:09.105640   12370 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:25:09.153105   12370 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:41 SystemTime:2023-11-27 23:25:09.145578974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 23:25:09.153289   12370 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1127 23:25:09.153520   12370 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1127 23:25:09.155138   12370 out.go:177] * Using Docker driver with root privileges
	I1127 23:25:09.156540   12370 cni.go:84] Creating CNI manager for ""
	I1127 23:25:09.156561   12370 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1127 23:25:09.156572   12370 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1127 23:25:09.156582   12370 start_flags.go:323] config:
	{Name:addons-931360 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-931360 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:25:09.157992   12370 out.go:177] * Starting control plane node addons-931360 in cluster addons-931360
	I1127 23:25:09.159191   12370 cache.go:121] Beginning downloading kic base image for docker with crio
	I1127 23:25:09.160502   12370 out.go:177] * Pulling base image ...
	I1127 23:25:09.161731   12370 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:25:09.161757   12370 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 23:25:09.161763   12370 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17206-4554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1127 23:25:09.161867   12370 cache.go:56] Caching tarball of preloaded images
	I1127 23:25:09.161935   12370 preload.go:174] Found /home/jenkins/minikube-integration/17206-4554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1127 23:25:09.161945   12370 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1127 23:25:09.162280   12370 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/config.json ...
	I1127 23:25:09.162303   12370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/config.json: {Name:mk12963de5a810f56aaf0edcf331cb82078d7f1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:25:09.176824   12370 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1127 23:25:09.176942   12370 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory
	I1127 23:25:09.176957   12370 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory, skipping pull
	I1127 23:25:09.176961   12370 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in cache, skipping pull
	I1127 23:25:09.176968   12370 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 as a tarball
	I1127 23:25:09.176975   12370 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 from local cache
	I1127 23:25:22.503015   12370 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 from cached tarball
	I1127 23:25:22.503054   12370 cache.go:194] Successfully downloaded all kic artifacts
	I1127 23:25:22.503102   12370 start.go:365] acquiring machines lock for addons-931360: {Name:mk14732bb2ecded3665f4ff8ae06c3c27f081b50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:25:22.503220   12370 start.go:369] acquired machines lock for "addons-931360" in 94.867µs
	I1127 23:25:22.503251   12370 start.go:93] Provisioning new machine with config: &{Name:addons-931360 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-931360 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1127 23:25:22.503350   12370 start.go:125] createHost starting for "" (driver="docker")
	I1127 23:25:22.587225   12370 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1127 23:25:22.587541   12370 start.go:159] libmachine.API.Create for "addons-931360" (driver="docker")
	I1127 23:25:22.587586   12370 client.go:168] LocalClient.Create starting
	I1127 23:25:22.587733   12370 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem
	I1127 23:25:22.689770   12370 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/cert.pem
	I1127 23:25:22.898523   12370 cli_runner.go:164] Run: docker network inspect addons-931360 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1127 23:25:22.914436   12370 cli_runner.go:211] docker network inspect addons-931360 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1127 23:25:22.914504   12370 network_create.go:281] running [docker network inspect addons-931360] to gather additional debugging logs...
	I1127 23:25:22.914523   12370 cli_runner.go:164] Run: docker network inspect addons-931360
	W1127 23:25:22.930707   12370 cli_runner.go:211] docker network inspect addons-931360 returned with exit code 1
	I1127 23:25:22.930734   12370 network_create.go:284] error running [docker network inspect addons-931360]: docker network inspect addons-931360: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-931360 not found
	I1127 23:25:22.930745   12370 network_create.go:286] output of [docker network inspect addons-931360]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-931360 not found
	
	** /stderr **
	I1127 23:25:22.930820   12370 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 23:25:22.947377   12370 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0027074a0}
	I1127 23:25:22.947416   12370 network_create.go:124] attempt to create docker network addons-931360 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1127 23:25:22.947461   12370 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-931360 addons-931360
	I1127 23:25:23.151813   12370 network_create.go:108] docker network addons-931360 192.168.49.0/24 created
	I1127 23:25:23.151853   12370 kic.go:121] calculated static IP "192.168.49.2" for the "addons-931360" container
	I1127 23:25:23.151942   12370 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1127 23:25:23.166479   12370 cli_runner.go:164] Run: docker volume create addons-931360 --label name.minikube.sigs.k8s.io=addons-931360 --label created_by.minikube.sigs.k8s.io=true
	I1127 23:25:23.212388   12370 oci.go:103] Successfully created a docker volume addons-931360
	I1127 23:25:23.212466   12370 cli_runner.go:164] Run: docker run --rm --name addons-931360-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-931360 --entrypoint /usr/bin/test -v addons-931360:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1127 23:25:26.384742   12370 cli_runner.go:217] Completed: docker run --rm --name addons-931360-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-931360 --entrypoint /usr/bin/test -v addons-931360:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib: (3.172232364s)
	I1127 23:25:26.384776   12370 oci.go:107] Successfully prepared a docker volume addons-931360
	I1127 23:25:26.384802   12370 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:25:26.384821   12370 kic.go:194] Starting extracting preloaded images to volume ...
	I1127 23:25:26.384873   12370 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17206-4554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-931360:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1127 23:25:31.535705   12370 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17206-4554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-931360:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir: (5.150799148s)
	I1127 23:25:31.535735   12370 kic.go:203] duration metric: took 5.150911 seconds to extract preloaded images to volume
	W1127 23:25:31.535899   12370 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1127 23:25:31.535992   12370 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1127 23:25:31.584712   12370 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-931360 --name addons-931360 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-931360 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-931360 --network addons-931360 --ip 192.168.49.2 --volume addons-931360:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1127 23:25:31.954478   12370 cli_runner.go:164] Run: docker container inspect addons-931360 --format={{.State.Running}}
	I1127 23:25:31.971047   12370 cli_runner.go:164] Run: docker container inspect addons-931360 --format={{.State.Status}}
	I1127 23:25:31.987942   12370 cli_runner.go:164] Run: docker exec addons-931360 stat /var/lib/dpkg/alternatives/iptables
	I1127 23:25:32.050711   12370 oci.go:144] the created container "addons-931360" has a running status.
	I1127 23:25:32.050739   12370 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17206-4554/.minikube/machines/addons-931360/id_rsa...
	I1127 23:25:32.290768   12370 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17206-4554/.minikube/machines/addons-931360/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1127 23:25:32.319234   12370 cli_runner.go:164] Run: docker container inspect addons-931360 --format={{.State.Status}}
	I1127 23:25:32.337639   12370 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1127 23:25:32.337657   12370 kic_runner.go:114] Args: [docker exec --privileged addons-931360 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1127 23:25:32.448094   12370 cli_runner.go:164] Run: docker container inspect addons-931360 --format={{.State.Status}}
	I1127 23:25:32.467355   12370 machine.go:88] provisioning docker machine ...
	I1127 23:25:32.467397   12370 ubuntu.go:169] provisioning hostname "addons-931360"
	I1127 23:25:32.467457   12370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-931360
	I1127 23:25:32.487241   12370 main.go:141] libmachine: Using SSH client type: native
	I1127 23:25:32.487728   12370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1127 23:25:32.487750   12370 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-931360 && echo "addons-931360" | sudo tee /etc/hostname
	I1127 23:25:32.668894   12370 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-931360
	
	I1127 23:25:32.668959   12370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-931360
	I1127 23:25:32.684689   12370 main.go:141] libmachine: Using SSH client type: native
	I1127 23:25:32.685144   12370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1127 23:25:32.685171   12370 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-931360' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-931360/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-931360' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1127 23:25:32.805831   12370 main.go:141] libmachine: SSH cmd err, output: <nil>: 
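
The script above is idempotent: it only rewrites or appends the 127.0.1.1 entry when the hostname is not already mapped. A quick manual check against the node container from this run (a sketch; the container name is taken from the log, and the exact output is illustrative):

    # confirm the hostname and the /etc/hosts mapping inside the kic node
    docker exec addons-931360 sh -c 'hostname && grep 127.0.1.1 /etc/hosts'
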
	I1127 23:25:32.805864   12370 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4554/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4554/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4554/.minikube}
	I1127 23:25:32.805879   12370 ubuntu.go:177] setting up certificates
	I1127 23:25:32.805886   12370 provision.go:83] configureAuth start
	I1127 23:25:32.805928   12370 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-931360
	I1127 23:25:32.822500   12370 provision.go:138] copyHostCerts
	I1127 23:25:32.822567   12370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4554/.minikube/ca.pem (1078 bytes)
	I1127 23:25:32.822674   12370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4554/.minikube/cert.pem (1123 bytes)
	I1127 23:25:32.822767   12370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4554/.minikube/key.pem (1679 bytes)
	I1127 23:25:32.822836   12370 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4554/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca-key.pem org=jenkins.addons-931360 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-931360]
	I1127 23:25:32.887745   12370 provision.go:172] copyRemoteCerts
	I1127 23:25:32.887806   12370 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1127 23:25:32.887837   12370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-931360
	I1127 23:25:32.904069   12370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/addons-931360/id_rsa Username:docker}
	I1127 23:25:32.994229   12370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1127 23:25:33.015043   12370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1127 23:25:33.035413   12370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1127 23:25:33.055898   12370 provision.go:86] duration metric: configureAuth took 250.001758ms
	I1127 23:25:33.055925   12370 ubuntu.go:193] setting minikube options for container-runtime
	I1127 23:25:33.056100   12370 config.go:182] Loaded profile config "addons-931360": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:25:33.056199   12370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-931360
	I1127 23:25:33.071548   12370 main.go:141] libmachine: Using SSH client type: native
	I1127 23:25:33.071876   12370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1127 23:25:33.071892   12370 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1127 23:25:33.280816   12370 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
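
The drop-in written above is how minikube passes the service CIDR to cri-o as an insecure registry before restarting the service. To read the file back from the node (sketch; same container as above):

    # show the generated cri-o option file
    docker exec addons-931360 cat /etc/sysconfig/crio.minikube
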
	I1127 23:25:33.280840   12370 machine.go:91] provisioned docker machine in 813.457854ms
	I1127 23:25:33.280850   12370 client.go:171] LocalClient.Create took 10.693255363s
	I1127 23:25:33.280871   12370 start.go:167] duration metric: libmachine.API.Create for "addons-931360" took 10.693334926s
	I1127 23:25:33.280881   12370 start.go:300] post-start starting for "addons-931360" (driver="docker")
	I1127 23:25:33.280897   12370 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1127 23:25:33.280964   12370 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1127 23:25:33.281013   12370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-931360
	I1127 23:25:33.297453   12370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/addons-931360/id_rsa Username:docker}
	I1127 23:25:33.386276   12370 ssh_runner.go:195] Run: cat /etc/os-release
	I1127 23:25:33.389243   12370 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1127 23:25:33.389289   12370 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1127 23:25:33.389307   12370 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1127 23:25:33.389316   12370 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1127 23:25:33.389332   12370 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4554/.minikube/addons for local assets ...
	I1127 23:25:33.389393   12370 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4554/.minikube/files for local assets ...
	I1127 23:25:33.389417   12370 start.go:303] post-start completed in 108.529089ms
	I1127 23:25:33.389689   12370 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-931360
	I1127 23:25:33.405102   12370 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/config.json ...
	I1127 23:25:33.405343   12370 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 23:25:33.405397   12370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-931360
	I1127 23:25:33.421156   12370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/addons-931360/id_rsa Username:docker}
	I1127 23:25:33.506730   12370 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1127 23:25:33.510704   12370 start.go:128] duration metric: createHost completed in 11.007340191s
	I1127 23:25:33.510730   12370 start.go:83] releasing machines lock for "addons-931360", held for 11.007496136s
	I1127 23:25:33.510801   12370 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-931360
	I1127 23:25:33.527051   12370 ssh_runner.go:195] Run: cat /version.json
	I1127 23:25:33.527099   12370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-931360
	I1127 23:25:33.527175   12370 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1127 23:25:33.527229   12370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-931360
	I1127 23:25:33.544541   12370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/addons-931360/id_rsa Username:docker}
	I1127 23:25:33.544727   12370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/addons-931360/id_rsa Username:docker}
	I1127 23:25:33.629859   12370 ssh_runner.go:195] Run: systemctl --version
	I1127 23:25:33.717001   12370 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1127 23:25:33.853252   12370 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1127 23:25:33.857171   12370 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 23:25:33.874279   12370 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1127 23:25:33.874353   12370 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 23:25:33.899512   12370 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1127 23:25:33.899540   12370 start.go:472] detecting cgroup driver to use...
	I1127 23:25:33.899570   12370 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1127 23:25:33.899637   12370 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1127 23:25:33.912267   12370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1127 23:25:33.921374   12370 docker.go:203] disabling cri-docker service (if available) ...
	I1127 23:25:33.921428   12370 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1127 23:25:33.932661   12370 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1127 23:25:33.944728   12370 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1127 23:25:34.014474   12370 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1127 23:25:34.094788   12370 docker.go:219] disabling docker service ...
	I1127 23:25:34.094851   12370 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1127 23:25:34.110935   12370 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1127 23:25:34.120860   12370 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1127 23:25:34.194024   12370 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1127 23:25:34.262789   12370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1127 23:25:34.272865   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 23:25:34.286825   12370 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1127 23:25:34.286874   12370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:25:34.295841   12370 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1127 23:25:34.295917   12370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:25:34.304673   12370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:25:34.312916   12370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
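
After the three sed edits above, /etc/crio/crio.conf.d/02-crio.conf should contain the following keys (reconstructed from the commands in the log rather than captured from the node):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
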
	I1127 23:25:34.321068   12370 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1127 23:25:34.328814   12370 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1127 23:25:34.335993   12370 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1127 23:25:34.342972   12370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 23:25:34.421127   12370 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1127 23:25:34.530287   12370 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1127 23:25:34.530354   12370 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1127 23:25:34.534258   12370 start.go:540] Will wait 60s for crictl version
	I1127 23:25:34.534351   12370 ssh_runner.go:195] Run: which crictl
	I1127 23:25:34.537262   12370 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1127 23:25:34.569495   12370 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1127 23:25:34.569604   12370 ssh_runner.go:195] Run: crio --version
	I1127 23:25:34.602719   12370 ssh_runner.go:195] Run: crio --version
	I1127 23:25:34.636152   12370 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1127 23:25:34.637766   12370 cli_runner.go:164] Run: docker network inspect addons-931360 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 23:25:34.653754   12370 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1127 23:25:34.657112   12370 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 23:25:34.666531   12370 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:25:34.666577   12370 ssh_runner.go:195] Run: sudo crictl images --output json
	I1127 23:25:34.717300   12370 crio.go:496] all images are preloaded for cri-o runtime.
	I1127 23:25:34.717330   12370 crio.go:415] Images already preloaded, skipping extraction
	I1127 23:25:34.717396   12370 ssh_runner.go:195] Run: sudo crictl images --output json
	I1127 23:25:34.747613   12370 crio.go:496] all images are preloaded for cri-o runtime.
	I1127 23:25:34.747637   12370 cache_images.go:84] Images are preloaded, skipping loading
	I1127 23:25:34.747699   12370 ssh_runner.go:195] Run: crio config
	I1127 23:25:34.787988   12370 cni.go:84] Creating CNI manager for ""
	I1127 23:25:34.788012   12370 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1127 23:25:34.788035   12370 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1127 23:25:34.788072   12370 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-931360 NodeName:addons-931360 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1127 23:25:34.788252   12370 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-931360"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
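
A generated config of this shape can be checked offline before kubeadm init runs; a minimal sketch, assuming the kubeadm v1.28 binary that minikube stages on the node supports the `config validate` subcommand (the binary and YAML paths match the init command later in this log):

    # validate the rendered kubeadm config without touching the cluster
    docker exec addons-931360 /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
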
	I1127 23:25:34.788346   12370 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-931360 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-931360 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
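
The kubelet flags above end up in a systemd drop-in on the node (the scp just below copies them to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). To read back what actually landed (sketch):

    # inspect the kubelet drop-in generated by minikube
    docker exec addons-931360 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
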
	I1127 23:25:34.788407   12370 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1127 23:25:34.796277   12370 binaries.go:44] Found k8s binaries, skipping transfer
	I1127 23:25:34.796333   12370 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1127 23:25:34.803738   12370 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1127 23:25:34.818390   12370 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1127 23:25:34.833100   12370 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1127 23:25:34.847979   12370 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1127 23:25:34.851110   12370 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 23:25:34.860189   12370 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360 for IP: 192.168.49.2
	I1127 23:25:34.860215   12370 certs.go:190] acquiring lock for shared ca certs: {Name:mkd1a5db8f506dfbef3cb84c722632fd59c37603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:25:34.860312   12370 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17206-4554/.minikube/ca.key
	I1127 23:25:35.122251   12370 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt ...
	I1127 23:25:35.122282   12370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt: {Name:mkfc959114f28543c93ad6c4395ead64c0e87a02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:25:35.122453   12370 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4554/.minikube/ca.key ...
	I1127 23:25:35.122464   12370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/.minikube/ca.key: {Name:mk0baf05e3af48668502e8c83480405e42852c81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:25:35.122528   12370 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17206-4554/.minikube/proxy-client-ca.key
	I1127 23:25:35.180769   12370 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4554/.minikube/proxy-client-ca.crt ...
	I1127 23:25:35.180794   12370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/.minikube/proxy-client-ca.crt: {Name:mk657d39bd4e1e635963b9620f6b9326f8baeb51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:25:35.180931   12370 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4554/.minikube/proxy-client-ca.key ...
	I1127 23:25:35.180941   12370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/.minikube/proxy-client-ca.key: {Name:mk2c519c40e4d4d17387bbb54bde9bee5e9af402 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:25:35.181037   12370 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.key
	I1127 23:25:35.181050   12370 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt with IP's: []
	I1127 23:25:35.279688   12370 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt ...
	I1127 23:25:35.279716   12370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt: {Name:mke8942cb2cf3deff6a1db181af14acd6e1b58f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:25:35.279867   12370 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.key ...
	I1127 23:25:35.279877   12370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.key: {Name:mk33e31e36a9688d9a5eb2dfd171c6286ad9a503 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:25:35.279940   12370 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/apiserver.key.dd3b5fb2
	I1127 23:25:35.279956   12370 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1127 23:25:35.447085   12370 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/apiserver.crt.dd3b5fb2 ...
	I1127 23:25:35.447115   12370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/apiserver.crt.dd3b5fb2: {Name:mk366b4ef60291d128865844bf53676c88d67dfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:25:35.447262   12370 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/apiserver.key.dd3b5fb2 ...
	I1127 23:25:35.447274   12370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/apiserver.key.dd3b5fb2: {Name:mkb9641a4b496dcc462b83a7464ebc405e9aadc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:25:35.447335   12370 certs.go:337] copying /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/apiserver.crt
	I1127 23:25:35.447403   12370 certs.go:341] copying /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/apiserver.key
	I1127 23:25:35.447451   12370 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/proxy-client.key
	I1127 23:25:35.447466   12370 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/proxy-client.crt with IP's: []
	I1127 23:25:35.758678   12370 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/proxy-client.crt ...
	I1127 23:25:35.758728   12370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/proxy-client.crt: {Name:mk9f622ec25afc422e2548cd9133eae9abb7fed0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:25:35.758900   12370 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/proxy-client.key ...
	I1127 23:25:35.758911   12370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/proxy-client.key: {Name:mkb71b39674af8c545f12fe7f36fa9f690c57188 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:25:35.759120   12370 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca-key.pem (1675 bytes)
	I1127 23:25:35.759161   12370 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem (1078 bytes)
	I1127 23:25:35.759190   12370 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/home/jenkins/minikube-integration/17206-4554/.minikube/certs/cert.pem (1123 bytes)
	I1127 23:25:35.759220   12370 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/home/jenkins/minikube-integration/17206-4554/.minikube/certs/key.pem (1679 bytes)
	I1127 23:25:35.759805   12370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1127 23:25:35.781031   12370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1127 23:25:35.802096   12370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1127 23:25:35.822488   12370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1127 23:25:35.843015   12370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1127 23:25:35.863219   12370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1127 23:25:35.883495   12370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1127 23:25:35.903962   12370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1127 23:25:35.924120   12370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1127 23:25:35.944213   12370 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1127 23:25:35.959266   12370 ssh_runner.go:195] Run: openssl version
	I1127 23:25:35.964188   12370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1127 23:25:35.972199   12370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:25:35.975169   12370 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:25 /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:25:35.975225   12370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:25:35.981236   12370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1127 23:25:35.989847   12370 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1127 23:25:35.992796   12370 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 23:25:35.992844   12370 kubeadm.go:404] StartCluster: {Name:addons-931360 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-931360 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:25:35.992905   12370 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1127 23:25:35.992944   12370 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1127 23:25:36.025271   12370 cri.go:89] found id: ""
	I1127 23:25:36.025348   12370 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1127 23:25:36.033242   12370 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1127 23:25:36.040997   12370 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1127 23:25:36.041068   12370 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1127 23:25:36.048861   12370 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1127 23:25:36.048900   12370 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1127 23:25:36.091540   12370 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1127 23:25:36.091637   12370 kubeadm.go:322] [preflight] Running pre-flight checks
	I1127 23:25:36.125479   12370 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1127 23:25:36.125587   12370 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1046-gcp
	I1127 23:25:36.125657   12370 kubeadm.go:322] OS: Linux
	I1127 23:25:36.125727   12370 kubeadm.go:322] CGROUPS_CPU: enabled
	I1127 23:25:36.125782   12370 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1127 23:25:36.125843   12370 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1127 23:25:36.125916   12370 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1127 23:25:36.126008   12370 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1127 23:25:36.126078   12370 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1127 23:25:36.126151   12370 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1127 23:25:36.126233   12370 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1127 23:25:36.126274   12370 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1127 23:25:36.184755   12370 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1127 23:25:36.184869   12370 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1127 23:25:36.184959   12370 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1127 23:25:36.371093   12370 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1127 23:25:36.375043   12370 out.go:204]   - Generating certificates and keys ...
	I1127 23:25:36.375184   12370 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1127 23:25:36.375287   12370 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1127 23:25:36.486165   12370 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1127 23:25:36.603348   12370 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1127 23:25:36.720629   12370 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1127 23:25:36.809442   12370 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1127 23:25:36.936557   12370 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1127 23:25:36.936701   12370 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-931360 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1127 23:25:37.003541   12370 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1127 23:25:37.003739   12370 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-931360 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1127 23:25:37.277082   12370 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1127 23:25:37.389258   12370 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1127 23:25:37.492429   12370 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1127 23:25:37.492546   12370 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1127 23:25:37.786866   12370 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1127 23:25:38.007285   12370 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1127 23:25:38.256133   12370 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1127 23:25:38.379303   12370 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1127 23:25:38.379778   12370 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1127 23:25:38.382093   12370 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1127 23:25:38.384565   12370 out.go:204]   - Booting up control plane ...
	I1127 23:25:38.384747   12370 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1127 23:25:38.384882   12370 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1127 23:25:38.384982   12370 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1127 23:25:38.392977   12370 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1127 23:25:38.393639   12370 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1127 23:25:38.393718   12370 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1127 23:25:38.466275   12370 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1127 23:25:43.468410   12370 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002217 seconds
	I1127 23:25:43.468520   12370 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1127 23:25:43.479852   12370 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1127 23:25:44.000368   12370 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1127 23:25:44.000581   12370 kubeadm.go:322] [mark-control-plane] Marking the node addons-931360 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1127 23:25:44.509026   12370 kubeadm.go:322] [bootstrap-token] Using token: avgxqb.7k3c3neg3j7bvr8n
	I1127 23:25:44.510537   12370 out.go:204]   - Configuring RBAC rules ...
	I1127 23:25:44.510640   12370 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1127 23:25:44.514095   12370 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1127 23:25:44.520271   12370 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1127 23:25:44.522832   12370 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1127 23:25:44.525465   12370 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1127 23:25:44.528059   12370 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1127 23:25:44.537395   12370 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1127 23:25:44.762198   12370 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1127 23:25:44.947989   12370 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1127 23:25:44.949136   12370 kubeadm.go:322] 
	I1127 23:25:44.949234   12370 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1127 23:25:44.949250   12370 kubeadm.go:322] 
	I1127 23:25:44.949340   12370 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1127 23:25:44.949354   12370 kubeadm.go:322] 
	I1127 23:25:44.949385   12370 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1127 23:25:44.949458   12370 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1127 23:25:44.949535   12370 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1127 23:25:44.949552   12370 kubeadm.go:322] 
	I1127 23:25:44.949648   12370 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1127 23:25:44.949688   12370 kubeadm.go:322] 
	I1127 23:25:44.949744   12370 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1127 23:25:44.949754   12370 kubeadm.go:322] 
	I1127 23:25:44.949819   12370 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1127 23:25:44.949912   12370 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1127 23:25:44.950015   12370 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1127 23:25:44.950028   12370 kubeadm.go:322] 
	I1127 23:25:44.950148   12370 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1127 23:25:44.950245   12370 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1127 23:25:44.950256   12370 kubeadm.go:322] 
	I1127 23:25:44.950364   12370 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token avgxqb.7k3c3neg3j7bvr8n \
	I1127 23:25:44.950535   12370 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4d50fd6fa1338d5979f67697fdf2bc9944f7b911d13890c8a839ee1a72bd8682 \
	I1127 23:25:44.950566   12370 kubeadm.go:322] 	--control-plane 
	I1127 23:25:44.950575   12370 kubeadm.go:322] 
	I1127 23:25:44.950680   12370 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1127 23:25:44.950745   12370 kubeadm.go:322] 
	I1127 23:25:44.950888   12370 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token avgxqb.7k3c3neg3j7bvr8n \
	I1127 23:25:44.951038   12370 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4d50fd6fa1338d5979f67697fdf2bc9944f7b911d13890c8a839ee1a72bd8682 
	I1127 23:25:44.953498   12370 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1046-gcp\n", err: exit status 1
	I1127 23:25:44.953676   12370 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
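
At this point the control plane is up; a quick way to confirm from outside, using the kubectl binary and kubeconfig that minikube stages on the node (a sketch; the same paths appear in the commands below):

    # check that the single node registered with the API server
    docker exec addons-931360 sudo /var/lib/minikube/binaries/v1.28.4/kubectl get nodes --kubeconfig=/var/lib/minikube/kubeconfig
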
	I1127 23:25:44.953714   12370 cni.go:84] Creating CNI manager for ""
	I1127 23:25:44.953731   12370 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1127 23:25:44.955290   12370 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1127 23:25:44.957539   12370 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1127 23:25:44.961677   12370 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1127 23:25:44.961701   12370 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1127 23:25:44.977713   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1127 23:25:45.659682   12370 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1127 23:25:45.659754   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=addons-931360 minikube.k8s.io/updated_at=2023_11_27T23_25_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:45.659761   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:45.666989   12370 ops.go:34] apiserver oom_adj: -16
	I1127 23:25:45.758828   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:45.821228   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:46.387823   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:46.887711   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:47.387934   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:47.887356   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:48.388097   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:48.887354   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:49.388274   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:49.887271   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:50.387807   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:50.887657   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:51.387525   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:51.887826   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:52.387196   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:52.888119   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:53.388236   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:53.888131   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:54.387783   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:54.888115   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:55.387825   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:55.888018   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:56.387429   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:56.887296   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:57.388020   12370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:25:57.449501   12370 kubeadm.go:1081] duration metric: took 11.789817291s to wait for elevateKubeSystemPrivileges.
	I1127 23:25:57.449532   12370 kubeadm.go:406] StartCluster complete in 21.456690951s
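
The long run of identical `get sa default` calls above is a poll: kubeadm creates the default service account asynchronously, so minikube retries until it exists. An equivalent shell sketch of that wait loop (paths as in the log):

    # wait until the default service account exists in the new cluster
    until docker exec addons-931360 sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
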
	I1127 23:25:57.449549   12370 settings.go:142] acquiring lock: {Name:mk8cf64b397eda9c03dbd178fc3aefd4ce90283a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:25:57.449647   12370 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4554/kubeconfig
	I1127 23:25:57.450029   12370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/kubeconfig: {Name:mkeacc22f444b1cc5befda4f2c22a9fc66e858ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:25:57.450225   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1127 23:25:57.450312   12370 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1127 23:25:57.450478   12370 addons.go:69] Setting volumesnapshots=true in profile "addons-931360"
	I1127 23:25:57.450505   12370 addons.go:231] Setting addon volumesnapshots=true in "addons-931360"
	I1127 23:25:57.450510   12370 addons.go:69] Setting default-storageclass=true in profile "addons-931360"
	I1127 23:25:57.450526   12370 config.go:182] Loaded profile config "addons-931360": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:25:57.450540   12370 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-931360"
	I1127 23:25:57.450544   12370 addons.go:69] Setting gcp-auth=true in profile "addons-931360"
	I1127 23:25:57.450562   12370 host.go:66] Checking if "addons-931360" exists ...
	I1127 23:25:57.450546   12370 addons.go:69] Setting helm-tiller=true in profile "addons-931360"
	I1127 23:25:57.450563   12370 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-931360"
	I1127 23:25:57.450580   12370 addons.go:231] Setting addon helm-tiller=true in "addons-931360"
	I1127 23:25:57.450590   12370 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-931360"
	I1127 23:25:57.450589   12370 addons.go:69] Setting ingress=true in profile "addons-931360"
	I1127 23:25:57.450600   12370 addons.go:69] Setting inspektor-gadget=true in profile "addons-931360"
	I1127 23:25:57.450624   12370 addons.go:231] Setting addon ingress=true in "addons-931360"
	I1127 23:25:57.450627   12370 host.go:66] Checking if "addons-931360" exists ...
	I1127 23:25:57.450619   12370 addons.go:231] Setting addon inspektor-gadget=true in "addons-931360"
	I1127 23:25:57.450637   12370 host.go:66] Checking if "addons-931360" exists ...
	I1127 23:25:57.450676   12370 host.go:66] Checking if "addons-931360" exists ...
	I1127 23:25:57.450683   12370 host.go:66] Checking if "addons-931360" exists ...
	I1127 23:25:57.450690   12370 addons.go:69] Setting metrics-server=true in profile "addons-931360"
	I1127 23:25:57.450703   12370 addons.go:231] Setting addon metrics-server=true in "addons-931360"
	I1127 23:25:57.450715   12370 addons.go:69] Setting cloud-spanner=true in profile "addons-931360"
	I1127 23:25:57.450726   12370 addons.go:231] Setting addon cloud-spanner=true in "addons-931360"
	I1127 23:25:57.450739   12370 host.go:66] Checking if "addons-931360" exists ...
	I1127 23:25:57.450763   12370 host.go:66] Checking if "addons-931360" exists ...
	I1127 23:25:57.451108   12370 cli_runner.go:164] Run: docker container inspect addons-931360 --format={{.State.Status}}
	I1127 23:25:57.451120   12370 cli_runner.go:164] Run: docker container inspect addons-931360 --format={{.State.Status}}
	I1127 23:25:57.451130   12370 addons.go:69] Setting storage-provisioner=true in profile "addons-931360"
	I1127 23:25:57.451142   12370 addons.go:231] Setting addon storage-provisioner=true in "addons-931360"
	I1127 23:25:57.451147   12370 cli_runner.go:164] Run: docker container inspect addons-931360 --format={{.State.Status}}
	I1127 23:25:57.451178   12370 host.go:66] Checking if "addons-931360" exists ...
	I1127 23:25:57.451184   12370 cli_runner.go:164] Run: docker container inspect addons-931360 --format={{.State.Status}}
	I1127 23:25:57.451194   12370 cli_runner.go:164] Run: docker container inspect addons-931360 --format={{.State.Status}}
	I1127 23:25:57.451265   12370 cli_runner.go:164] Run: docker container inspect addons-931360 --format={{.State.Status}}
	I1127 23:25:57.451300   12370 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-931360"
	I1127 23:25:57.451337   12370 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-931360"
	I1127 23:25:57.451384   12370 host.go:66] Checking if "addons-931360" exists ...
	I1127 23:25:57.451119   12370 addons.go:69] Setting registry=true in profile "addons-931360"
	I1127 23:25:57.451610   12370 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-931360"
	I1127 23:25:57.451625   12370 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-931360"
	I1127 23:25:57.451841   12370 cli_runner.go:164] Run: docker container inspect addons-931360 --format={{.State.Status}}
	I1127 23:25:57.451862   12370 cli_runner.go:164] Run: docker container inspect addons-931360 --format={{.State.Status}}
	I1127 23:25:57.450515   12370 addons.go:69] Setting ingress-dns=true in profile "addons-931360"
	I1127 23:25:57.451909   12370 addons.go:231] Setting addon ingress-dns=true in "addons-931360"
	I1127 23:25:57.451954   12370 host.go:66] Checking if "addons-931360" exists ...
	I1127 23:25:57.452391   12370 cli_runner.go:164] Run: docker container inspect addons-931360 --format={{.State.Status}}
	I1127 23:25:57.451108   12370 cli_runner.go:164] Run: docker container inspect addons-931360 --format={{.State.Status}}
	I1127 23:25:57.451627   12370 addons.go:231] Setting addon registry=true in "addons-931360"
	I1127 23:25:57.465358   12370 host.go:66] Checking if "addons-931360" exists ...
	I1127 23:25:57.465923   12370 cli_runner.go:164] Run: docker container inspect addons-931360 --format={{.State.Status}}
	I1127 23:25:57.450573   12370 mustload.go:65] Loading cluster: addons-931360
	I1127 23:25:57.451108   12370 cli_runner.go:164] Run: docker container inspect addons-931360 --format={{.State.Status}}
	I1127 23:25:57.469323   12370 config.go:182] Loaded profile config "addons-931360": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:25:57.451599   12370 cli_runner.go:164] Run: docker container inspect addons-931360 --format={{.State.Status}}
	I1127 23:25:57.469601   12370 cli_runner.go:164] Run: docker container inspect addons-931360 --format={{.State.Status}}
	I1127 23:25:57.485943   12370 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1127 23:25:57.488871   12370 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1127 23:25:57.490604   12370 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1127 23:25:57.493946   12370 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1127 23:25:57.495724   12370 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1127 23:25:57.495741   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1127 23:25:57.495795   12370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-931360
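
The "scp memory" lines above mean the manifest is streamed from an in-memory asset over the SSH connection rather than copied from a file on disk; the byte count in parentheses is the size of the rendered manifest. A rough shell equivalent of that transfer (an assumed sketch of the mechanism, not minikube's actual code; $MANIFEST, the key path, and the port are placeholders):

	printf '%s' "$MANIFEST" | ssh -p 32772 -i id_rsa docker@127.0.0.1 \
	  "sudo tee /etc/kubernetes/addons/deployment.yaml >/dev/null"
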
	I1127 23:25:57.493903   12370 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1127 23:25:57.498575   12370 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1127 23:25:57.500559   12370 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1127 23:25:57.501913   12370 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1127 23:25:57.503170   12370 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1127 23:25:57.504740   12370 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1127 23:25:57.504760   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1127 23:25:57.504817   12370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-931360
	I1127 23:25:57.504576   12370 addons.go:231] Setting addon default-storageclass=true in "addons-931360"
	I1127 23:25:57.505510   12370 host.go:66] Checking if "addons-931360" exists ...
	I1127 23:25:57.506557   12370 cli_runner.go:164] Run: docker container inspect addons-931360 --format={{.State.Status}}
	I1127 23:25:57.510391   12370 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1127 23:25:57.525124   12370 host.go:66] Checking if "addons-931360" exists ...
	I1127 23:25:57.511214   12370 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-931360"
	I1127 23:25:57.521616   12370 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-931360" context rescaled to 1 replicas
	I1127 23:25:57.527290   12370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/addons-931360/id_rsa Username:docker}
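
Each cli_runner inspect with the NetworkSettings.Ports template above looks up the host port Docker published for the container's 22/tcp, and the sshutil line then dials 127.0.0.1 on that port (32772 here). The lookup can be reproduced by hand:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-931360
	# prints the published SSH port, e.g. 32772
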
	I1127 23:25:57.528299   12370 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1127 23:25:57.528670   12370 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1127 23:25:57.528713   12370 host.go:66] Checking if "addons-931360" exists ...
	I1127 23:25:57.530284   12370 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1127 23:25:57.530291   12370 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.22.0
	I1127 23:25:57.530315   12370 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1127 23:25:57.531419   12370 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1127 23:25:57.531426   12370 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1127 23:25:57.532410   12370 cli_runner.go:164] Run: docker container inspect addons-931360 --format={{.State.Status}}
	I1127 23:25:57.533933   12370 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1127 23:25:57.537692   12370 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1127 23:25:57.535811   12370 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1127 23:25:57.535852   12370 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1127 23:25:57.536035   12370 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1127 23:25:57.536272   12370 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:25:57.536430   12370 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1127 23:25:57.536510   12370 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1127 23:25:57.536519   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1127 23:25:57.536519   12370 out.go:177] * Verifying Kubernetes components...
	I1127 23:25:57.543145   12370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:25:57.544967   12370 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1127 23:25:57.544979   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1127 23:25:57.545017   12370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-931360
	I1127 23:25:57.541610   12370 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1127 23:25:57.547051   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1127 23:25:57.547106   12370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-931360
	I1127 23:25:57.541620   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1127 23:25:57.547393   12370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-931360
	I1127 23:25:57.541627   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1127 23:25:57.547636   12370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-931360
	I1127 23:25:57.541633   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1127 23:25:57.550262   12370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-931360
	I1127 23:25:57.552685   12370 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 23:25:57.552704   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1127 23:25:57.552759   12370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-931360
	I1127 23:25:57.541642   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1127 23:25:57.553185   12370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-931360
	I1127 23:25:57.553457   12370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/addons-931360/id_rsa Username:docker}
	I1127 23:25:57.541688   12370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-931360
	I1127 23:25:57.544104   12370 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1127 23:25:57.554451   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1127 23:25:57.554504   12370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-931360
	I1127 23:25:57.576853   12370 out.go:177]   - Using image docker.io/registry:2.8.3
	I1127 23:25:57.578110   12370 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1127 23:25:57.577758   12370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/addons-931360/id_rsa Username:docker}
	I1127 23:25:57.581515   12370 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1127 23:25:57.579798   12370 out.go:177]   - Using image docker.io/busybox:stable
	I1127 23:25:57.585261   12370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/addons-931360/id_rsa Username:docker}
	I1127 23:25:57.585481   12370 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1127 23:25:57.585496   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1127 23:25:57.585548   12370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-931360
	I1127 23:25:57.587674   12370 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1127 23:25:57.587694   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1127 23:25:57.587755   12370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-931360
	I1127 23:25:57.591467   12370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/addons-931360/id_rsa Username:docker}
	I1127 23:25:57.600008   12370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/addons-931360/id_rsa Username:docker}
	I1127 23:25:57.603513   12370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/addons-931360/id_rsa Username:docker}
	I1127 23:25:57.611528   12370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/addons-931360/id_rsa Username:docker}
	I1127 23:25:57.613071   12370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/addons-931360/id_rsa Username:docker}
	I1127 23:25:57.614867   12370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/addons-931360/id_rsa Username:docker}
	I1127 23:25:57.620714   12370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/addons-931360/id_rsa Username:docker}
	I1127 23:25:57.624451   12370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/addons-931360/id_rsa Username:docker}
	I1127 23:25:57.628990   12370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/addons-931360/id_rsa Username:docker}
	W1127 23:25:57.646649   12370 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1127 23:25:57.646694   12370 retry.go:31] will retry after 134.383594ms: ssh: handshake failed: EOF
	I1127 23:25:57.653254   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1127 23:25:57.654365   12370 node_ready.go:35] waiting up to 6m0s for node "addons-931360" to be "Ready" ...
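
node_ready.go polls the node object until its Ready condition turns True; the node_ready:58 lines further down are iterations of that poll against the 6m0s budget. A roughly equivalent one-shot check, assuming kubectl is pointed at the same cluster:

	kubectl get node addons-931360 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# prints False until kubelet and the CNI report the node Ready
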
	I1127 23:25:57.844986   12370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1127 23:25:57.860745   12370 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1127 23:25:57.860777   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1127 23:25:57.958572   12370 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1127 23:25:57.958597   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1127 23:25:58.046445   12370 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1127 23:25:58.046472   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1127 23:25:58.059148   12370 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1127 23:25:58.059393   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1127 23:25:58.059355   12370 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1127 23:25:58.059522   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1127 23:25:58.062584   12370 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1127 23:25:58.062608   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1127 23:25:58.144889   12370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1127 23:25:58.155667   12370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1127 23:25:58.161667   12370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1127 23:25:58.161685   12370 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1127 23:25:58.161705   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1127 23:25:58.246530   12370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 23:25:58.250819   12370 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1127 23:25:58.250850   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1127 23:25:58.345767   12370 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1127 23:25:58.345846   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1127 23:25:58.349738   12370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1127 23:25:58.351019   12370 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1127 23:25:58.351084   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1127 23:25:58.353992   12370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1127 23:25:58.359122   12370 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1127 23:25:58.359147   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1127 23:25:58.445863   12370 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1127 23:25:58.445897   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1127 23:25:58.449758   12370 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1127 23:25:58.449790   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1127 23:25:58.552188   12370 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1127 23:25:58.552273   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1127 23:25:58.554344   12370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1127 23:25:58.651837   12370 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1127 23:25:58.651933   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1127 23:25:58.743616   12370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1127 23:25:58.748437   12370 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1127 23:25:58.748518   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1127 23:25:58.758171   12370 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1127 23:25:58.758212   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1127 23:25:58.944376   12370 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1127 23:25:58.944463   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1127 23:25:58.959004   12370 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1127 23:25:58.959089   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1127 23:25:59.246569   12370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1127 23:25:59.256781   12370 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1127 23:25:59.256810   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1127 23:25:59.348378   12370 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1127 23:25:59.348479   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1127 23:25:59.544811   12370 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1127 23:25:59.544837   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1127 23:25:59.555694   12370 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.902397377s)
	I1127 23:25:59.555841   12370 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
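
The completed pipeline above edits the coredns ConfigMap in place so that host.minikube.internal resolves to the host-side gateway (192.168.49.1). Reconstructed from the sed expression in the command itself, the injected Corefile stanza can be confirmed with:

	kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
	#        hosts {
	#           192.168.49.1 host.minikube.internal
	#           fallthrough
	#        }
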
	I1127 23:25:59.655598   12370 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1127 23:25:59.655686   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1127 23:25:59.761383   12370 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1127 23:25:59.761463   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1127 23:25:59.957829   12370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1127 23:26:00.146017   12370 node_ready.go:58] node "addons-931360" has status "Ready":"False"
	I1127 23:26:00.154705   12370 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1127 23:26:00.154774   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1127 23:26:00.253341   12370 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1127 23:26:00.253423   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1127 23:26:00.551687   12370 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1127 23:26:00.551768   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1127 23:26:00.664784   12370 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1127 23:26:00.664816   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1127 23:26:00.851230   12370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1127 23:26:01.048635   12370 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1127 23:26:01.048667   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1127 23:26:01.556973   12370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1127 23:26:02.158775   12370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.313744206s)
	I1127 23:26:02.358670   12370 node_ready.go:58] node "addons-931360" has status "Ready":"False"
	I1127 23:26:03.949054   12370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.804080589s)
	I1127 23:26:03.949090   12370 addons.go:467] Verifying addon ingress=true in "addons-931360"
	I1127 23:26:03.949121   12370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.793402801s)
	I1127 23:26:03.951044   12370 out.go:177] * Verifying ingress addon...
	I1127 23:26:03.949463   12370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.787764991s)
	I1127 23:26:03.949524   12370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.70290392s)
	I1127 23:26:03.949552   12370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.599746392s)
	I1127 23:26:03.949591   12370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.595544141s)
	I1127 23:26:03.949631   12370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.39518364s)
	I1127 23:26:03.949702   12370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.20600424s)
	I1127 23:26:03.949741   12370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.703076312s)
	I1127 23:26:03.949851   12370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.991927506s)
	I1127 23:26:03.949917   12370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.098652748s)
	I1127 23:26:03.951131   12370 addons.go:467] Verifying addon registry=true in "addons-931360"
	I1127 23:26:03.952763   12370 out.go:177] * Verifying registry addon...
	I1127 23:26:03.951299   12370 addons.go:467] Verifying addon metrics-server=true in "addons-931360"
	W1127 23:26:03.951334   12370 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1127 23:26:03.953584   12370 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1127 23:26:03.954159   12370 retry.go:31] will retry after 280.460702ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1127 23:26:03.954895   12370 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W1127 23:26:03.956401   12370 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
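
The storage-provisioner-rancher warning above is an optimistic-concurrency conflict: something else updated the local-path StorageClass between minikube's read and its write, so the write carried a stale resourceVersion and was rejected. A strategic-merge patch applies against the current object and sidesteps that class of conflict; a manual equivalent of what the callback was attempting (a sketch, not the addon's own code path):

	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
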
	I1127 23:26:03.958693   12370 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1127 23:26:03.958709   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:03.959028   12370 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1127 23:26:03.959048   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:03.961301   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:03.961836   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
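
The kapi.go:96 lines that repeat from here are one poll each: the helper re-checks the pods matching the label selector until all of them report Running. A roughly equivalent blocking check, assuming the same selectors and namespaces as the log:

	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=registry --timeout=6m
	kubectl -n ingress-nginx wait --for=condition=Ready pod \
	  -l app.kubernetes.io/name=ingress-nginx --timeout=6m
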
	I1127 23:26:04.235596   12370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1127 23:26:04.347980   12370 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1127 23:26:04.348069   12370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-931360
	I1127 23:26:04.366895   12370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/addons-931360/id_rsa Username:docker}
	I1127 23:26:04.474982   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:04.478554   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:04.561735   12370 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1127 23:26:04.642568   12370 addons.go:231] Setting addon gcp-auth=true in "addons-931360"
	I1127 23:26:04.642631   12370 host.go:66] Checking if "addons-931360" exists ...
	I1127 23:26:04.643230   12370 cli_runner.go:164] Run: docker container inspect addons-931360 --format={{.State.Status}}
	I1127 23:26:04.668400   12370 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1127 23:26:04.668467   12370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-931360
	I1127 23:26:04.672305   12370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.11524608s)
	I1127 23:26:04.672349   12370 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-931360"
	I1127 23:26:04.674092   12370 out.go:177] * Verifying csi-hostpath-driver addon...
	I1127 23:26:04.676691   12370 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1127 23:26:04.687568   12370 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1127 23:26:04.687592   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:04.691274   12370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/addons-931360/id_rsa Username:docker}
	I1127 23:26:04.691505   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:04.766746   12370 node_ready.go:58] node "addons-931360" has status "Ready":"False"
	I1127 23:26:04.964945   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:04.965772   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:05.195570   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:05.237336   12370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.001693698s)
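
The arc above is a CRD-establishment race: the first apply submitted the VolumeSnapshotClass in the same batch that created its CRD, and the API server rejected it ("ensure CRDs are installed first") because the CRD was not yet established; the forced re-apply 280ms later succeeded once it was. One way to avoid the race when applying CRDs and their custom resources together (a sketch, not what minikube does):

	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f csi-hostpath-snapshotclass.yaml
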
	I1127 23:26:05.240259   12370 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1127 23:26:05.241922   12370 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1127 23:26:05.243386   12370 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1127 23:26:05.243404   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1127 23:26:05.259354   12370 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1127 23:26:05.259381   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1127 23:26:05.275584   12370 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1127 23:26:05.275603   12370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1127 23:26:05.291820   12370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1127 23:26:05.466166   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:05.466510   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:05.745984   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:05.967476   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:05.968585   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:06.250580   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:06.343499   12370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.051633004s)
	I1127 23:26:06.344498   12370 addons.go:467] Verifying addon gcp-auth=true in "addons-931360"
	I1127 23:26:06.346436   12370 out.go:177] * Verifying gcp-auth addon...
	I1127 23:26:06.348893   12370 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1127 23:26:06.352171   12370 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1127 23:26:06.352188   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:06.354719   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:06.466003   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:06.466652   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:06.746113   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:06.858743   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:06.966686   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:06.967262   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:07.245111   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:07.266751   12370 node_ready.go:58] node "addons-931360" has status "Ready":"False"
	I1127 23:26:07.358764   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:07.466518   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:07.466833   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:07.746135   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:07.859224   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:07.970574   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:07.970810   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:08.195560   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:08.358000   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:08.465590   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:08.465705   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:08.696032   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:08.858818   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:08.965211   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:08.965519   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:09.195603   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:09.267412   12370 node_ready.go:58] node "addons-931360" has status "Ready":"False"
	I1127 23:26:09.358382   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:09.465533   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:09.465967   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:09.696471   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:09.858545   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:09.965209   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:09.965810   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:10.196118   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:10.357917   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:10.465862   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:10.465941   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:10.695767   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:10.857913   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:10.965046   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:10.965137   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:11.195829   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:11.357573   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:11.465725   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:11.466396   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:11.695504   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:11.766636   12370 node_ready.go:58] node "addons-931360" has status "Ready":"False"
	I1127 23:26:11.858498   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:11.965719   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:11.966214   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:12.194935   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:12.357911   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:12.465032   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:12.465208   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:12.695852   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:12.857666   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:12.965303   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:12.965868   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:13.196198   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:13.358517   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:13.465708   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:13.465978   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:13.695461   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:13.766820   12370 node_ready.go:58] node "addons-931360" has status "Ready":"False"
	I1127 23:26:13.858342   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:13.971361   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:13.971538   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:14.195639   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:14.358993   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:14.465393   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:14.465415   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:14.696238   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:14.858301   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:14.965318   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:14.965501   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:15.196081   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:15.358032   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:15.465462   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:15.465530   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:15.695463   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:15.858267   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:15.965494   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:15.965673   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:16.196370   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:16.266586   12370 node_ready.go:58] node "addons-931360" has status "Ready":"False"
	I1127 23:26:16.358437   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:16.467016   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:16.468241   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:16.695300   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:16.858158   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:16.965401   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:16.965635   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:17.195961   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:17.357957   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:17.465747   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:17.465949   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:17.695174   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:17.857997   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:17.965312   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:17.965610   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:18.195103   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:18.358334   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:18.465524   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:18.465714   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:18.696352   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:18.766542   12370 node_ready.go:58] node "addons-931360" has status "Ready":"False"
	I1127 23:26:18.858253   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:18.965683   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:18.966114   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:19.195794   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:19.357690   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:19.465466   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:19.465549   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:19.696201   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:19.858435   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:19.966047   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:19.966304   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:20.195696   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:20.358582   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:20.465028   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:20.465823   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:20.696001   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:20.858126   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:20.965768   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:20.965864   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:21.195280   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:21.266821   12370 node_ready.go:58] node "addons-931360" has status "Ready":"False"
	I1127 23:26:21.358340   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:21.465685   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:21.465831   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:21.695825   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:21.857742   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:21.965616   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:21.965814   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:22.196556   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:22.358730   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:22.465319   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:22.465529   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:22.695870   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:22.858778   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:22.965527   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:22.965747   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:23.195303   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:23.266858   12370 node_ready.go:58] node "addons-931360" has status "Ready":"False"
	I1127 23:26:23.358653   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:23.465143   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:23.465558   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:23.695588   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:23.858614   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:23.964811   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:23.965720   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:24.195060   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:24.358170   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:24.465449   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:24.465965   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:24.696198   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:24.858232   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:24.965347   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:24.965684   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:25.195820   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:25.358146   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:25.465522   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:25.465854   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:25.696073   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:25.766456   12370 node_ready.go:58] node "addons-931360" has status "Ready":"False"
	I1127 23:26:25.858612   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:25.964943   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:25.965508   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:26.195568   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:26.357573   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:26.465383   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:26.466494   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:26.695383   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:26.863893   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:26.965286   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:26.965690   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:27.195521   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:27.358748   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:27.465329   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:27.465519   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:27.696079   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:27.858191   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:27.965659   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:27.965902   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:28.195354   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:28.266663   12370 node_ready.go:58] node "addons-931360" has status "Ready":"False"
	I1127 23:26:28.358474   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:28.466210   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:28.466511   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:28.695880   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:28.857905   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:28.965279   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:28.965380   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:29.195323   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:29.358626   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:29.464956   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:29.465618   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:29.695700   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:29.858664   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:29.964931   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:29.965859   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:30.196028   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:30.358134   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:30.465430   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:30.465774   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:30.696221   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:30.766601   12370 node_ready.go:58] node "addons-931360" has status "Ready":"False"
	I1127 23:26:30.858410   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:30.965804   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:30.965830   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:31.244996   12370 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1127 23:26:31.245023   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:31.266486   12370 node_ready.go:49] node "addons-931360" has status "Ready":"True"
	I1127 23:26:31.266509   12370 node_ready.go:38] duration metric: took 33.612111263s waiting for node "addons-931360" to be "Ready" ...
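The node_ready lines above are one poll loop: roughly every 2.5 seconds the harness re-reads the node object and stops once its Ready condition turns True. A minimal client-go sketch of that loop (the kubeconfig location is an assumption, and minikube's own node_ready.go may structure the loop differently):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes the default kubeconfig location; adjust for your setup.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Re-check the node until its Ready condition is True, as the
		// node_ready.go:58 lines above do for node "addons-931360".
		for {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-931360", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(2500 * time.Millisecond)
		}
	}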
	I1127 23:26:31.266526   12370 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 23:26:31.274226   12370 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vqrwf" in "kube-system" namespace to be "Ready" ...
	I1127 23:26:31.367963   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:31.465952   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:31.466068   12370 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1127 23:26:31.466093   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:31.696325   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:31.859252   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:31.966393   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:31.966797   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:32.197385   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:32.357506   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:32.466279   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:32.466329   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:32.697140   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:32.858439   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:32.966128   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:32.966161   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:33.197048   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:33.290735   12370 pod_ready.go:92] pod "coredns-5dd5756b68-vqrwf" in "kube-system" namespace has status "Ready":"True"
	I1127 23:26:33.290757   12370 pod_ready.go:81] duration metric: took 2.016508541s waiting for pod "coredns-5dd5756b68-vqrwf" in "kube-system" namespace to be "Ready" ...
	I1127 23:26:33.290775   12370 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-931360" in "kube-system" namespace to be "Ready" ...
	I1127 23:26:33.294989   12370 pod_ready.go:92] pod "etcd-addons-931360" in "kube-system" namespace has status "Ready":"True"
	I1127 23:26:33.295009   12370 pod_ready.go:81] duration metric: took 4.227417ms waiting for pod "etcd-addons-931360" in "kube-system" namespace to be "Ready" ...
	I1127 23:26:33.295019   12370 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-931360" in "kube-system" namespace to be "Ready" ...
	I1127 23:26:33.299418   12370 pod_ready.go:92] pod "kube-apiserver-addons-931360" in "kube-system" namespace has status "Ready":"True"
	I1127 23:26:33.299438   12370 pod_ready.go:81] duration metric: took 4.41144ms waiting for pod "kube-apiserver-addons-931360" in "kube-system" namespace to be "Ready" ...
	I1127 23:26:33.299450   12370 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-931360" in "kube-system" namespace to be "Ready" ...
	I1127 23:26:33.304091   12370 pod_ready.go:92] pod "kube-controller-manager-addons-931360" in "kube-system" namespace has status "Ready":"True"
	I1127 23:26:33.304110   12370 pod_ready.go:81] duration metric: took 4.652213ms waiting for pod "kube-controller-manager-addons-931360" in "kube-system" namespace to be "Ready" ...
	I1127 23:26:33.304123   12370 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-szskt" in "kube-system" namespace to be "Ready" ...
	I1127 23:26:33.358213   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:33.466185   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:33.466387   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:33.667238   12370 pod_ready.go:92] pod "kube-proxy-szskt" in "kube-system" namespace has status "Ready":"True"
	I1127 23:26:33.667263   12370 pod_ready.go:81] duration metric: took 363.131633ms waiting for pod "kube-proxy-szskt" in "kube-system" namespace to be "Ready" ...
	I1127 23:26:33.667276   12370 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-931360" in "kube-system" namespace to be "Ready" ...
	I1127 23:26:33.697000   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:33.858398   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:33.966079   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:33.966232   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:34.067354   12370 pod_ready.go:92] pod "kube-scheduler-addons-931360" in "kube-system" namespace has status "Ready":"True"
	I1127 23:26:34.067379   12370 pod_ready.go:81] duration metric: took 400.094535ms waiting for pod "kube-scheduler-addons-931360" in "kube-system" namespace to be "Ready" ...
	I1127 23:26:34.067393   12370 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-7rcvh" in "kube-system" namespace to be "Ready" ...
	I1127 23:26:34.196605   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:34.358848   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:34.466043   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:34.466944   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:34.696957   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:34.858302   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:34.966274   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:34.966508   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:35.199397   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:35.358237   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:35.465729   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:35.465973   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:35.697155   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:35.858600   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:35.965489   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:35.965896   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:36.196811   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:36.358562   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:36.372313   12370 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7rcvh" in "kube-system" namespace has status "Ready":"False"
	I1127 23:26:36.466075   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:36.466258   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:36.747486   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:36.858500   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:36.965821   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:36.966133   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:37.196886   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:37.358007   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:37.465709   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:37.465896   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:37.751041   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:37.858515   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:37.966041   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:37.966115   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:38.196592   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:38.358303   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:38.373710   12370 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7rcvh" in "kube-system" namespace has status "Ready":"False"
	I1127 23:26:38.465670   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:38.466014   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:38.697162   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:38.858318   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:38.966204   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:38.966252   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:39.197572   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:39.358856   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:39.466667   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:39.466789   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:39.697451   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:39.858477   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:39.966127   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:39.966323   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:40.196269   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:40.358705   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:40.466309   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:40.466311   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:40.696871   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:40.857981   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:40.872848   12370 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7rcvh" in "kube-system" namespace has status "Ready":"False"
	I1127 23:26:40.966489   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:40.967261   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:41.196614   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:41.357879   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:41.465826   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:41.466190   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:41.697028   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:41.858730   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:41.966414   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:41.966423   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:42.196722   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:42.358421   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:42.466255   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:42.466255   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:42.696876   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:42.858201   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:42.875397   12370 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7rcvh" in "kube-system" namespace has status "Ready":"False"
	I1127 23:26:42.967454   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:42.968328   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:43.196259   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:43.359068   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:43.466534   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:43.467574   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:43.747702   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:43.859071   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:43.970767   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:43.972196   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:44.249130   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:44.358853   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:44.466775   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:44.467524   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:44.746474   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:44.858793   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:44.966456   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:44.967573   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:45.196543   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:45.357882   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:45.372464   12370 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7rcvh" in "kube-system" namespace has status "Ready":"False"
	I1127 23:26:45.466964   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:45.467125   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:45.697566   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:45.858810   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:45.965607   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:45.965740   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:46.197503   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:46.359098   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:46.466272   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:46.466441   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:46.697894   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:46.858351   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:46.966815   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:46.967463   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:47.197515   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:47.357906   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:47.372563   12370 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7rcvh" in "kube-system" namespace has status "Ready":"False"
	I1127 23:26:47.465745   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:47.466228   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:47.697208   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:47.858731   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:47.966239   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:47.966517   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:48.248552   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:48.358724   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:48.472553   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:48.544585   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:48.698943   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:48.859075   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:48.967669   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:48.968246   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:49.198325   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:49.358431   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:49.466410   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:49.466418   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:49.696943   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:49.857721   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:49.872684   12370 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7rcvh" in "kube-system" namespace has status "Ready":"False"
	I1127 23:26:49.966012   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:49.966286   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:50.196389   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:50.358728   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:50.468106   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:50.468139   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:50.696098   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:50.860767   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:50.966049   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:50.966784   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:51.197327   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:51.358241   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:51.466106   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:51.466231   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:51.696710   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:51.858347   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:51.966152   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:51.966514   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:52.244099   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:52.357742   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:52.373303   12370 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7rcvh" in "kube-system" namespace has status "Ready":"False"
	I1127 23:26:52.466236   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:52.466239   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:52.696378   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:52.858714   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:52.966515   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:52.966718   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:53.254279   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:53.365001   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:53.451876   12370 pod_ready.go:92] pod "metrics-server-7c66d45ddc-7rcvh" in "kube-system" namespace has status "Ready":"True"
	I1127 23:26:53.451967   12370 pod_ready.go:81] duration metric: took 19.384565291s waiting for pod "metrics-server-7c66d45ddc-7rcvh" in "kube-system" namespace to be "Ready" ...
	I1127 23:26:53.451996   12370 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-497hr" in "kube-system" namespace to be "Ready" ...
	I1127 23:26:53.470845   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:53.545572   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:53.745676   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:53.858893   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:53.966604   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:53.967258   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:54.197159   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:54.358431   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:54.466456   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:54.466575   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:54.697048   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:54.858528   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:54.966308   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:54.966692   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:55.196836   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:55.358560   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:55.466936   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:55.466952   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:55.561721   12370 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-497hr" in "kube-system" namespace has status "Ready":"False"
	I1127 23:26:55.697996   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:55.859439   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:55.966566   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:55.966726   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:56.247778   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:56.358663   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:56.466136   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:56.466175   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:56.698403   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:56.858351   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:56.965864   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:56.966009   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:57.198405   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:57.359032   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:57.467935   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:57.468012   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:57.697000   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:57.859429   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:57.966039   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:57.966469   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:58.061845   12370 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-497hr" in "kube-system" namespace has status "Ready":"False"
	I1127 23:26:58.197537   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:58.358697   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:58.465780   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:58.465907   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:58.697314   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:58.858566   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:58.965748   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:58.965868   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:59.196586   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:59.358089   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:59.465889   12370 kapi.go:107] duration metric: took 55.510992868s to wait for kubernetes.io/minikube-addons=registry ...
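Each kapi.go:96 line above is a single tick of a poll against one label selector, repeated until every matching pod leaves Pending; the kapi.go:107 line marks the selector whose wait just completed. One tick looks roughly like this (a sketch assuming the poll only inspects pod phase; the namespace and selector come from the caller):

	package podwait

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podsRunning is one tick of the poll: true once at least one pod
	// matches the selector and every match has reached phase Running.
	func podsRunning(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		if len(pods.Items) == 0 {
			return false, nil // selector matches nothing yet
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil // the log's "current state: Pending"
			}
		}
		return true, nil
	}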
	I1127 23:26:59.465898   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:59.697165   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:59.858453   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:26:59.966262   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:00.196880   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:00.358516   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:27:00.466204   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:00.561452   12370 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-497hr" in "kube-system" namespace has status "Ready":"False"
	I1127 23:27:00.748219   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:00.859245   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:27:00.967751   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:01.252142   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:01.359500   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:27:01.467022   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:01.745074   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:01.858407   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:27:01.966599   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:02.245149   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:02.359091   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:27:02.467044   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:02.561521   12370 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-497hr" in "kube-system" namespace has status "Ready":"False"
	I1127 23:27:02.697147   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:02.858920   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:27:02.966963   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:03.245636   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:03.358594   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:27:03.466690   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:03.696510   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:03.858829   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:27:03.967038   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:04.196992   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:04.358642   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:27:04.466278   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:04.696937   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:04.858224   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:27:04.966196   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:05.060763   12370 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-497hr" in "kube-system" namespace has status "Ready":"False"
	I1127 23:27:05.197005   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:05.357930   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:27:05.465617   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:05.559849   12370 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-497hr" in "kube-system" namespace has status "Ready":"True"
	I1127 23:27:05.559871   12370 pod_ready.go:81] duration metric: took 12.107840155s waiting for pod "nvidia-device-plugin-daemonset-497hr" in "kube-system" namespace to be "Ready" ...
	I1127 23:27:05.559887   12370 pod_ready.go:38] duration metric: took 34.293350324s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
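The pod_ready checks just completed are stricter than the phase polls above: a pod only counts once its PodReady condition is True, which is why a pod that is already Running (metrics-server, the nvidia device plugin) can still report "Ready":"False" for a while. The predicate, sketched with the same client-go types as above:

	package podwait

	import corev1 "k8s.io/api/core/v1"

	// podReady mirrors the pod_ready.go checks: Ready means the PodReady
	// condition is True, not merely that the pod has reached phase Running.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}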
	I1127 23:27:05.559902   12370 api_server.go:52] waiting for apiserver process to appear ...
	I1127 23:27:05.559940   12370 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1127 23:27:05.559985   12370 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1127 23:27:05.592789   12370 cri.go:89] found id: "86812d2dc287c946da981c60ee60c06aadc5c2ab2db5c7ce9acf4edadd910370"
	I1127 23:27:05.592818   12370 cri.go:89] found id: ""
	I1127 23:27:05.592829   12370 logs.go:284] 1 containers: [86812d2dc287c946da981c60ee60c06aadc5c2ab2db5c7ce9acf4edadd910370]
	I1127 23:27:05.592880   12370 ssh_runner.go:195] Run: which crictl
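Each cri.go/ssh_runner.go pair above runs crictl on the node over SSH and records the container IDs it returns for one control-plane component. Run locally instead of over SSH, the same lookup is roughly (crictl normally requires root):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// List IDs of all kube-apiserver containers, running or exited,
		// matching the "sudo crictl ps -a --quiet --name=kube-apiserver"
		// command in the log above.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
		if err != nil {
			panic(err)
		}
		ids := strings.Fields(string(out))
		fmt.Printf("found %d container(s): %v\n", len(ids), ids)
	}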
	I1127 23:27:05.596393   12370 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1127 23:27:05.596455   12370 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1127 23:27:05.630455   12370 cri.go:89] found id: "9de419682c01f005cdb114cb47335ca1c1a72746abb30a1f0cd5f06ac741e9b6"
	I1127 23:27:05.630476   12370 cri.go:89] found id: ""
	I1127 23:27:05.630484   12370 logs.go:284] 1 containers: [9de419682c01f005cdb114cb47335ca1c1a72746abb30a1f0cd5f06ac741e9b6]
	I1127 23:27:05.630537   12370 ssh_runner.go:195] Run: which crictl
	I1127 23:27:05.633724   12370 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1127 23:27:05.633775   12370 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1127 23:27:05.665333   12370 cri.go:89] found id: "03fe6725a1b4db1e0268ffbdddf784b68729c9fe7b3cb1f449de8c25c73841a9"
	I1127 23:27:05.665356   12370 cri.go:89] found id: ""
	I1127 23:27:05.665363   12370 logs.go:284] 1 containers: [03fe6725a1b4db1e0268ffbdddf784b68729c9fe7b3cb1f449de8c25c73841a9]
	I1127 23:27:05.665402   12370 ssh_runner.go:195] Run: which crictl
	I1127 23:27:05.668458   12370 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1127 23:27:05.668512   12370 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1127 23:27:05.697141   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:05.706069   12370 cri.go:89] found id: "f1363cb38fe8708db9d1f0e2c16e80f99162eaca5c8cb1de29e71377e7ced14f"
	I1127 23:27:05.706095   12370 cri.go:89] found id: ""
	I1127 23:27:05.706104   12370 logs.go:284] 1 containers: [f1363cb38fe8708db9d1f0e2c16e80f99162eaca5c8cb1de29e71377e7ced14f]
	I1127 23:27:05.706159   12370 ssh_runner.go:195] Run: which crictl
	I1127 23:27:05.709448   12370 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1127 23:27:05.709516   12370 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1127 23:27:05.749630   12370 cri.go:89] found id: "e1af6a7cc2a3e504ce53d9fb97cebb26969056645d5b06660f5e873ea00c2088"
	I1127 23:27:05.749649   12370 cri.go:89] found id: ""
	I1127 23:27:05.749657   12370 logs.go:284] 1 containers: [e1af6a7cc2a3e504ce53d9fb97cebb26969056645d5b06660f5e873ea00c2088]
	I1127 23:27:05.749703   12370 ssh_runner.go:195] Run: which crictl
	I1127 23:27:05.752845   12370 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1127 23:27:05.752900   12370 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1127 23:27:05.784013   12370 cri.go:89] found id: "b8a16d5d06388121f5f5c09fdda2330fde204b55cbc579f5c30302102dafc83d"
	I1127 23:27:05.784039   12370 cri.go:89] found id: ""
	I1127 23:27:05.784049   12370 logs.go:284] 1 containers: [b8a16d5d06388121f5f5c09fdda2330fde204b55cbc579f5c30302102dafc83d]
	I1127 23:27:05.784094   12370 ssh_runner.go:195] Run: which crictl
	I1127 23:27:05.787293   12370 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1127 23:27:05.787353   12370 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1127 23:27:05.828092   12370 cri.go:89] found id: "154a7d051f67565f798dbee09f9db80f380ba6a29a01601b81138c31867614d0"
	I1127 23:27:05.828122   12370 cri.go:89] found id: ""
	I1127 23:27:05.828132   12370 logs.go:284] 1 containers: [154a7d051f67565f798dbee09f9db80f380ba6a29a01601b81138c31867614d0]
	I1127 23:27:05.828188   12370 ssh_runner.go:195] Run: which crictl
	I1127 23:27:05.832284   12370 logs.go:123] Gathering logs for container status ...
	I1127 23:27:05.832314   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1127 23:27:05.858376   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:27:05.878136   12370 logs.go:123] Gathering logs for etcd [9de419682c01f005cdb114cb47335ca1c1a72746abb30a1f0cd5f06ac741e9b6] ...
	I1127 23:27:05.878165   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9de419682c01f005cdb114cb47335ca1c1a72746abb30a1f0cd5f06ac741e9b6"
	I1127 23:27:05.961666   12370 logs.go:123] Gathering logs for coredns [03fe6725a1b4db1e0268ffbdddf784b68729c9fe7b3cb1f449de8c25c73841a9] ...
	I1127 23:27:05.961698   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03fe6725a1b4db1e0268ffbdddf784b68729c9fe7b3cb1f449de8c25c73841a9"
	I1127 23:27:05.966184   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:05.999977   12370 logs.go:123] Gathering logs for kube-scheduler [f1363cb38fe8708db9d1f0e2c16e80f99162eaca5c8cb1de29e71377e7ced14f] ...
	I1127 23:27:06.000010   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1363cb38fe8708db9d1f0e2c16e80f99162eaca5c8cb1de29e71377e7ced14f"
	I1127 23:27:06.075464   12370 logs.go:123] Gathering logs for kube-proxy [e1af6a7cc2a3e504ce53d9fb97cebb26969056645d5b06660f5e873ea00c2088] ...
	I1127 23:27:06.075493   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1af6a7cc2a3e504ce53d9fb97cebb26969056645d5b06660f5e873ea00c2088"
	I1127 23:27:06.146046   12370 logs.go:123] Gathering logs for kindnet [154a7d051f67565f798dbee09f9db80f380ba6a29a01601b81138c31867614d0] ...
	I1127 23:27:06.146096   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 154a7d051f67565f798dbee09f9db80f380ba6a29a01601b81138c31867614d0"
	I1127 23:27:06.187719   12370 logs.go:123] Gathering logs for CRI-O ...
	I1127 23:27:06.187745   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1127 23:27:06.196729   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:06.262141   12370 logs.go:123] Gathering logs for kubelet ...
	I1127 23:27:06.262185   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1127 23:27:06.347163   12370 logs.go:123] Gathering logs for dmesg ...
	I1127 23:27:06.347212   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1127 23:27:06.359069   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:27:06.360772   12370 logs.go:123] Gathering logs for describe nodes ...
	I1127 23:27:06.360799   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1127 23:27:06.466185   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:06.561302   12370 logs.go:123] Gathering logs for kube-apiserver [86812d2dc287c946da981c60ee60c06aadc5c2ab2db5c7ce9acf4edadd910370] ...
	I1127 23:27:06.561337   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86812d2dc287c946da981c60ee60c06aadc5c2ab2db5c7ce9acf4edadd910370"
	I1127 23:27:06.608550   12370 logs.go:123] Gathering logs for kube-controller-manager [b8a16d5d06388121f5f5c09fdda2330fde204b55cbc579f5c30302102dafc83d] ...
	I1127 23:27:06.608582   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8a16d5d06388121f5f5c09fdda2330fde204b55cbc579f5c30302102dafc83d"
	I1127 23:27:06.698377   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:06.858693   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:27:06.965424   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:07.196174   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:07.415317   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:27:07.555612   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:07.696525   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:07.859079   12370 kapi.go:107] duration metric: took 1m1.510182675s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1127 23:27:07.861748   12370 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-931360 cluster.
	I1127 23:27:07.863231   12370 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1127 23:27:07.864548   12370 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1127 23:27:07.966564   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:08.196553   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:08.544047   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:08.749028   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:08.966801   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:09.203886   12370 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 23:27:09.247875   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:09.343607   12370 api_server.go:72] duration metric: took 1m11.809709665s to wait for apiserver process to appear ...
	I1127 23:27:09.343706   12370 api_server.go:88] waiting for apiserver healthz status ...
	I1127 23:27:09.343775   12370 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1127 23:27:09.343873   12370 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1127 23:27:09.464087   12370 cri.go:89] found id: "86812d2dc287c946da981c60ee60c06aadc5c2ab2db5c7ce9acf4edadd910370"
	I1127 23:27:09.464119   12370 cri.go:89] found id: ""
	I1127 23:27:09.464128   12370 logs.go:284] 1 containers: [86812d2dc287c946da981c60ee60c06aadc5c2ab2db5c7ce9acf4edadd910370]
	I1127 23:27:09.464176   12370 ssh_runner.go:195] Run: which crictl
	I1127 23:27:09.466640   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:09.468714   12370 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1127 23:27:09.468773   12370 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1127 23:27:09.652488   12370 cri.go:89] found id: "9de419682c01f005cdb114cb47335ca1c1a72746abb30a1f0cd5f06ac741e9b6"
	I1127 23:27:09.652557   12370 cri.go:89] found id: ""
	I1127 23:27:09.652570   12370 logs.go:284] 1 containers: [9de419682c01f005cdb114cb47335ca1c1a72746abb30a1f0cd5f06ac741e9b6]
	I1127 23:27:09.652621   12370 ssh_runner.go:195] Run: which crictl
	I1127 23:27:09.656733   12370 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1127 23:27:09.656799   12370 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1127 23:27:09.746600   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:09.766971   12370 cri.go:89] found id: "03fe6725a1b4db1e0268ffbdddf784b68729c9fe7b3cb1f449de8c25c73841a9"
	I1127 23:27:09.766995   12370 cri.go:89] found id: ""
	I1127 23:27:09.767004   12370 logs.go:284] 1 containers: [03fe6725a1b4db1e0268ffbdddf784b68729c9fe7b3cb1f449de8c25c73841a9]
	I1127 23:27:09.767051   12370 ssh_runner.go:195] Run: which crictl
	I1127 23:27:09.770696   12370 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1127 23:27:09.770752   12370 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1127 23:27:09.952147   12370 cri.go:89] found id: "f1363cb38fe8708db9d1f0e2c16e80f99162eaca5c8cb1de29e71377e7ced14f"
	I1127 23:27:09.952172   12370 cri.go:89] found id: ""
	I1127 23:27:09.952194   12370 logs.go:284] 1 containers: [f1363cb38fe8708db9d1f0e2c16e80f99162eaca5c8cb1de29e71377e7ced14f]
	I1127 23:27:09.952246   12370 ssh_runner.go:195] Run: which crictl
	I1127 23:27:09.956348   12370 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1127 23:27:09.956430   12370 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1127 23:27:09.966909   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:10.066064   12370 cri.go:89] found id: "e1af6a7cc2a3e504ce53d9fb97cebb26969056645d5b06660f5e873ea00c2088"
	I1127 23:27:10.066090   12370 cri.go:89] found id: ""
	I1127 23:27:10.066100   12370 logs.go:284] 1 containers: [e1af6a7cc2a3e504ce53d9fb97cebb26969056645d5b06660f5e873ea00c2088]
	I1127 23:27:10.066158   12370 ssh_runner.go:195] Run: which crictl
	I1127 23:27:10.073422   12370 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1127 23:27:10.073490   12370 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1127 23:27:10.247724   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:10.249355   12370 cri.go:89] found id: "b8a16d5d06388121f5f5c09fdda2330fde204b55cbc579f5c30302102dafc83d"
	I1127 23:27:10.249376   12370 cri.go:89] found id: ""
	I1127 23:27:10.249385   12370 logs.go:284] 1 containers: [b8a16d5d06388121f5f5c09fdda2330fde204b55cbc579f5c30302102dafc83d]
	I1127 23:27:10.249432   12370 ssh_runner.go:195] Run: which crictl
	I1127 23:27:10.253122   12370 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1127 23:27:10.253174   12370 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1127 23:27:10.360738   12370 cri.go:89] found id: "154a7d051f67565f798dbee09f9db80f380ba6a29a01601b81138c31867614d0"
	I1127 23:27:10.360765   12370 cri.go:89] found id: ""
	I1127 23:27:10.360777   12370 logs.go:284] 1 containers: [154a7d051f67565f798dbee09f9db80f380ba6a29a01601b81138c31867614d0]
	I1127 23:27:10.360833   12370 ssh_runner.go:195] Run: which crictl
	I1127 23:27:10.364667   12370 logs.go:123] Gathering logs for kindnet [154a7d051f67565f798dbee09f9db80f380ba6a29a01601b81138c31867614d0] ...
	I1127 23:27:10.364694   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 154a7d051f67565f798dbee09f9db80f380ba6a29a01601b81138c31867614d0"
	I1127 23:27:10.466447   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:10.470172   12370 logs.go:123] Gathering logs for CRI-O ...
	I1127 23:27:10.470203   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1127 23:27:10.629129   12370 logs.go:123] Gathering logs for kubelet ...
	I1127 23:27:10.629169   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1127 23:27:10.724894   12370 logs.go:123] Gathering logs for etcd [9de419682c01f005cdb114cb47335ca1c1a72746abb30a1f0cd5f06ac741e9b6] ...
	I1127 23:27:10.724931   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9de419682c01f005cdb114cb47335ca1c1a72746abb30a1f0cd5f06ac741e9b6"
	I1127 23:27:10.745992   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:10.872056   12370 logs.go:123] Gathering logs for coredns [03fe6725a1b4db1e0268ffbdddf784b68729c9fe7b3cb1f449de8c25c73841a9] ...
	I1127 23:27:10.872087   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03fe6725a1b4db1e0268ffbdddf784b68729c9fe7b3cb1f449de8c25c73841a9"
	I1127 23:27:10.953721   12370 logs.go:123] Gathering logs for kube-proxy [e1af6a7cc2a3e504ce53d9fb97cebb26969056645d5b06660f5e873ea00c2088] ...
	I1127 23:27:10.953754   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1af6a7cc2a3e504ce53d9fb97cebb26969056645d5b06660f5e873ea00c2088"
	I1127 23:27:10.967106   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:10.990178   12370 logs.go:123] Gathering logs for kube-controller-manager [b8a16d5d06388121f5f5c09fdda2330fde204b55cbc579f5c30302102dafc83d] ...
	I1127 23:27:10.990213   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8a16d5d06388121f5f5c09fdda2330fde204b55cbc579f5c30302102dafc83d"
	I1127 23:27:11.100399   12370 logs.go:123] Gathering logs for dmesg ...
	I1127 23:27:11.100437   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1127 23:27:11.112490   12370 logs.go:123] Gathering logs for describe nodes ...
	I1127 23:27:11.112516   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1127 23:27:11.198178   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:11.270765   12370 logs.go:123] Gathering logs for kube-apiserver [86812d2dc287c946da981c60ee60c06aadc5c2ab2db5c7ce9acf4edadd910370] ...
	I1127 23:27:11.270794   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86812d2dc287c946da981c60ee60c06aadc5c2ab2db5c7ce9acf4edadd910370"
	I1127 23:27:11.368916   12370 logs.go:123] Gathering logs for kube-scheduler [f1363cb38fe8708db9d1f0e2c16e80f99162eaca5c8cb1de29e71377e7ced14f] ...
	I1127 23:27:11.368952   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1363cb38fe8708db9d1f0e2c16e80f99162eaca5c8cb1de29e71377e7ced14f"
	I1127 23:27:11.464096   12370 logs.go:123] Gathering logs for container status ...
	I1127 23:27:11.464130   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1127 23:27:11.467104   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:11.696500   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:11.965899   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:12.197430   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:12.466130   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:12.697041   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:12.965712   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:13.197751   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:13.467075   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:13.696618   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:13.966326   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:14.004418   12370 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1127 23:27:14.047471   12370 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1127 23:27:14.048512   12370 api_server.go:141] control plane version: v1.28.4
	I1127 23:27:14.048534   12370 api_server.go:131] duration metric: took 4.704810035s to wait for apiserver health ...
	I1127 23:27:14.048542   12370 system_pods.go:43] waiting for kube-system pods to appear ...
	I1127 23:27:14.048563   12370 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1127 23:27:14.048620   12370 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1127 23:27:14.083067   12370 cri.go:89] found id: "86812d2dc287c946da981c60ee60c06aadc5c2ab2db5c7ce9acf4edadd910370"
	I1127 23:27:14.083095   12370 cri.go:89] found id: ""
	I1127 23:27:14.083104   12370 logs.go:284] 1 containers: [86812d2dc287c946da981c60ee60c06aadc5c2ab2db5c7ce9acf4edadd910370]
	I1127 23:27:14.083159   12370 ssh_runner.go:195] Run: which crictl
	I1127 23:27:14.086411   12370 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1127 23:27:14.086473   12370 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1127 23:27:14.155584   12370 cri.go:89] found id: "9de419682c01f005cdb114cb47335ca1c1a72746abb30a1f0cd5f06ac741e9b6"
	I1127 23:27:14.155608   12370 cri.go:89] found id: ""
	I1127 23:27:14.155615   12370 logs.go:284] 1 containers: [9de419682c01f005cdb114cb47335ca1c1a72746abb30a1f0cd5f06ac741e9b6]
	I1127 23:27:14.155663   12370 ssh_runner.go:195] Run: which crictl
	I1127 23:27:14.159061   12370 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1127 23:27:14.159138   12370 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1127 23:27:14.195475   12370 cri.go:89] found id: "03fe6725a1b4db1e0268ffbdddf784b68729c9fe7b3cb1f449de8c25c73841a9"
	I1127 23:27:14.195499   12370 cri.go:89] found id: ""
	I1127 23:27:14.195508   12370 logs.go:284] 1 containers: [03fe6725a1b4db1e0268ffbdddf784b68729c9fe7b3cb1f449de8c25c73841a9]
	I1127 23:27:14.195557   12370 ssh_runner.go:195] Run: which crictl
	I1127 23:27:14.197264   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:14.199057   12370 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1127 23:27:14.199107   12370 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1127 23:27:14.254167   12370 cri.go:89] found id: "f1363cb38fe8708db9d1f0e2c16e80f99162eaca5c8cb1de29e71377e7ced14f"
	I1127 23:27:14.254189   12370 cri.go:89] found id: ""
	I1127 23:27:14.254196   12370 logs.go:284] 1 containers: [f1363cb38fe8708db9d1f0e2c16e80f99162eaca5c8cb1de29e71377e7ced14f]
	I1127 23:27:14.254241   12370 ssh_runner.go:195] Run: which crictl
	I1127 23:27:14.257514   12370 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1127 23:27:14.257577   12370 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1127 23:27:14.294006   12370 cri.go:89] found id: "e1af6a7cc2a3e504ce53d9fb97cebb26969056645d5b06660f5e873ea00c2088"
	I1127 23:27:14.294027   12370 cri.go:89] found id: ""
	I1127 23:27:14.294034   12370 logs.go:284] 1 containers: [e1af6a7cc2a3e504ce53d9fb97cebb26969056645d5b06660f5e873ea00c2088]
	I1127 23:27:14.294097   12370 ssh_runner.go:195] Run: which crictl
	I1127 23:27:14.297309   12370 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1127 23:27:14.297362   12370 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1127 23:27:14.330837   12370 cri.go:89] found id: "b8a16d5d06388121f5f5c09fdda2330fde204b55cbc579f5c30302102dafc83d"
	I1127 23:27:14.330867   12370 cri.go:89] found id: ""
	I1127 23:27:14.330877   12370 logs.go:284] 1 containers: [b8a16d5d06388121f5f5c09fdda2330fde204b55cbc579f5c30302102dafc83d]
	I1127 23:27:14.330926   12370 ssh_runner.go:195] Run: which crictl
	I1127 23:27:14.345451   12370 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1127 23:27:14.345525   12370 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1127 23:27:14.382218   12370 cri.go:89] found id: "154a7d051f67565f798dbee09f9db80f380ba6a29a01601b81138c31867614d0"
	I1127 23:27:14.382244   12370 cri.go:89] found id: ""
	I1127 23:27:14.382254   12370 logs.go:284] 1 containers: [154a7d051f67565f798dbee09f9db80f380ba6a29a01601b81138c31867614d0]
	I1127 23:27:14.382302   12370 ssh_runner.go:195] Run: which crictl
	I1127 23:27:14.385916   12370 logs.go:123] Gathering logs for kube-proxy [e1af6a7cc2a3e504ce53d9fb97cebb26969056645d5b06660f5e873ea00c2088] ...
	I1127 23:27:14.385938   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1af6a7cc2a3e504ce53d9fb97cebb26969056645d5b06660f5e873ea00c2088"
	I1127 23:27:14.456631   12370 logs.go:123] Gathering logs for kindnet [154a7d051f67565f798dbee09f9db80f380ba6a29a01601b81138c31867614d0] ...
	I1127 23:27:14.456667   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 154a7d051f67565f798dbee09f9db80f380ba6a29a01601b81138c31867614d0"
	I1127 23:27:14.466389   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:14.490537   12370 logs.go:123] Gathering logs for kubelet ...
	I1127 23:27:14.490562   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1127 23:27:14.562662   12370 logs.go:123] Gathering logs for dmesg ...
	I1127 23:27:14.562694   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1127 23:27:14.574159   12370 logs.go:123] Gathering logs for describe nodes ...
	I1127 23:27:14.574185   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1127 23:27:14.681079   12370 logs.go:123] Gathering logs for etcd [9de419682c01f005cdb114cb47335ca1c1a72746abb30a1f0cd5f06ac741e9b6] ...
	I1127 23:27:14.681106   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9de419682c01f005cdb114cb47335ca1c1a72746abb30a1f0cd5f06ac741e9b6"
	I1127 23:27:14.697226   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:14.770356   12370 logs.go:123] Gathering logs for coredns [03fe6725a1b4db1e0268ffbdddf784b68729c9fe7b3cb1f449de8c25c73841a9] ...
	I1127 23:27:14.770391   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03fe6725a1b4db1e0268ffbdddf784b68729c9fe7b3cb1f449de8c25c73841a9"
	I1127 23:27:14.858364   12370 logs.go:123] Gathering logs for kube-scheduler [f1363cb38fe8708db9d1f0e2c16e80f99162eaca5c8cb1de29e71377e7ced14f] ...
	I1127 23:27:14.858406   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1363cb38fe8708db9d1f0e2c16e80f99162eaca5c8cb1de29e71377e7ced14f"
	I1127 23:27:14.966838   12370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:15.058310   12370 logs.go:123] Gathering logs for container status ...
	I1127 23:27:15.058344   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1127 23:27:15.249816   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:15.254148   12370 logs.go:123] Gathering logs for kube-apiserver [86812d2dc287c946da981c60ee60c06aadc5c2ab2db5c7ce9acf4edadd910370] ...
	I1127 23:27:15.254178   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86812d2dc287c946da981c60ee60c06aadc5c2ab2db5c7ce9acf4edadd910370"
	I1127 23:27:15.381278   12370 logs.go:123] Gathering logs for kube-controller-manager [b8a16d5d06388121f5f5c09fdda2330fde204b55cbc579f5c30302102dafc83d] ...
	I1127 23:27:15.381315   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8a16d5d06388121f5f5c09fdda2330fde204b55cbc579f5c30302102dafc83d"
	I1127 23:27:15.466700   12370 kapi.go:107] duration metric: took 1m11.513113419s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1127 23:27:15.502387   12370 logs.go:123] Gathering logs for CRI-O ...
	I1127 23:27:15.502417   12370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1127 23:27:15.696453   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:16.196984   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:16.696535   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:17.196557   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:17.698184   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:18.126869   12370 system_pods.go:59] 19 kube-system pods found
	I1127 23:27:18.126905   12370 system_pods.go:61] "coredns-5dd5756b68-vqrwf" [19992107-5032-47f6-8715-ffa05ea587f5] Running
	I1127 23:27:18.126914   12370 system_pods.go:61] "csi-hostpath-attacher-0" [fbe9d187-56a0-44ce-b511-6dd7c3383618] Running
	I1127 23:27:18.126920   12370 system_pods.go:61] "csi-hostpath-resizer-0" [81d40e76-54a7-4b83-974f-a0629e4c85d1] Running
	I1127 23:27:18.126931   12370 system_pods.go:61] "csi-hostpathplugin-c5shf" [bb761e7c-4634-40a5-a17e-04a560e68f47] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1127 23:27:18.126938   12370 system_pods.go:61] "etcd-addons-931360" [165277f4-155e-43d0-9360-8c99128a7663] Running
	I1127 23:27:18.126945   12370 system_pods.go:61] "kindnet-pcpbf" [7665ce9a-43ed-4ae0-b22a-aa2c1a5b43f5] Running
	I1127 23:27:18.126949   12370 system_pods.go:61] "kube-apiserver-addons-931360" [b9df0c78-fbfc-4c48-829f-9cec92b225cf] Running
	I1127 23:27:18.126958   12370 system_pods.go:61] "kube-controller-manager-addons-931360" [fb1626eb-2909-409b-b6d5-5fbd61f9682e] Running
	I1127 23:27:18.126963   12370 system_pods.go:61] "kube-ingress-dns-minikube" [db96c30c-42e1-4554-857c-ecda27ca99ed] Running
	I1127 23:27:18.126968   12370 system_pods.go:61] "kube-proxy-szskt" [ab048c1d-3cfe-4d76-a6ce-c5370a20ccc3] Running
	I1127 23:27:18.126973   12370 system_pods.go:61] "kube-scheduler-addons-931360" [4ded2a18-8516-45f0-9e70-cafc7ca6d96c] Running
	I1127 23:27:18.126980   12370 system_pods.go:61] "metrics-server-7c66d45ddc-7rcvh" [9b447e1e-d353-4274-b7ab-31ae20a302f2] Running
	I1127 23:27:18.126984   12370 system_pods.go:61] "nvidia-device-plugin-daemonset-497hr" [3faeada4-0ee7-4d44-81b3-200c71fd40b5] Running
	I1127 23:27:18.126990   12370 system_pods.go:61] "registry-proxy-8qd9d" [2537aa5b-0543-4d68-a3bc-91099fbe1789] Running
	I1127 23:27:18.126994   12370 system_pods.go:61] "registry-qpl48" [0b324504-49a4-4094-95c0-5738fb210318] Running
	I1127 23:27:18.127001   12370 system_pods.go:61] "snapshot-controller-58dbcc7b99-cjpbj" [3da64b45-68ee-4137-9182-6dec74ae7f59] Running
	I1127 23:27:18.127005   12370 system_pods.go:61] "snapshot-controller-58dbcc7b99-sxc2d" [11463a85-3dc0-4b72-bb04-c82ee19897b7] Running
	I1127 23:27:18.127011   12370 system_pods.go:61] "storage-provisioner" [557ed4ac-b898-4f2c-a162-e19af4bedbf8] Running
	I1127 23:27:18.127015   12370 system_pods.go:61] "tiller-deploy-7b677967b9-6vhzg" [f0db5041-51d0-4f7b-bd01-63edd775e33b] Running
	I1127 23:27:18.127023   12370 system_pods.go:74] duration metric: took 4.078475066s to wait for pod list to return data ...
	I1127 23:27:18.127030   12370 default_sa.go:34] waiting for default service account to be created ...
	I1127 23:27:18.129371   12370 default_sa.go:45] found service account: "default"
	I1127 23:27:18.129392   12370 default_sa.go:55] duration metric: took 2.356089ms for default service account to be created ...
	I1127 23:27:18.129399   12370 system_pods.go:116] waiting for k8s-apps to be running ...
	I1127 23:27:18.136924   12370 system_pods.go:86] 19 kube-system pods found
	I1127 23:27:18.136948   12370 system_pods.go:89] "coredns-5dd5756b68-vqrwf" [19992107-5032-47f6-8715-ffa05ea587f5] Running
	I1127 23:27:18.136954   12370 system_pods.go:89] "csi-hostpath-attacher-0" [fbe9d187-56a0-44ce-b511-6dd7c3383618] Running
	I1127 23:27:18.136958   12370 system_pods.go:89] "csi-hostpath-resizer-0" [81d40e76-54a7-4b83-974f-a0629e4c85d1] Running
	I1127 23:27:18.136966   12370 system_pods.go:89] "csi-hostpathplugin-c5shf" [bb761e7c-4634-40a5-a17e-04a560e68f47] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1127 23:27:18.136972   12370 system_pods.go:89] "etcd-addons-931360" [165277f4-155e-43d0-9360-8c99128a7663] Running
	I1127 23:27:18.136981   12370 system_pods.go:89] "kindnet-pcpbf" [7665ce9a-43ed-4ae0-b22a-aa2c1a5b43f5] Running
	I1127 23:27:18.136985   12370 system_pods.go:89] "kube-apiserver-addons-931360" [b9df0c78-fbfc-4c48-829f-9cec92b225cf] Running
	I1127 23:27:18.136989   12370 system_pods.go:89] "kube-controller-manager-addons-931360" [fb1626eb-2909-409b-b6d5-5fbd61f9682e] Running
	I1127 23:27:18.136996   12370 system_pods.go:89] "kube-ingress-dns-minikube" [db96c30c-42e1-4554-857c-ecda27ca99ed] Running
	I1127 23:27:18.136999   12370 system_pods.go:89] "kube-proxy-szskt" [ab048c1d-3cfe-4d76-a6ce-c5370a20ccc3] Running
	I1127 23:27:18.137006   12370 system_pods.go:89] "kube-scheduler-addons-931360" [4ded2a18-8516-45f0-9e70-cafc7ca6d96c] Running
	I1127 23:27:18.137011   12370 system_pods.go:89] "metrics-server-7c66d45ddc-7rcvh" [9b447e1e-d353-4274-b7ab-31ae20a302f2] Running
	I1127 23:27:18.137015   12370 system_pods.go:89] "nvidia-device-plugin-daemonset-497hr" [3faeada4-0ee7-4d44-81b3-200c71fd40b5] Running
	I1127 23:27:18.137019   12370 system_pods.go:89] "registry-proxy-8qd9d" [2537aa5b-0543-4d68-a3bc-91099fbe1789] Running
	I1127 23:27:18.137023   12370 system_pods.go:89] "registry-qpl48" [0b324504-49a4-4094-95c0-5738fb210318] Running
	I1127 23:27:18.137027   12370 system_pods.go:89] "snapshot-controller-58dbcc7b99-cjpbj" [3da64b45-68ee-4137-9182-6dec74ae7f59] Running
	I1127 23:27:18.137030   12370 system_pods.go:89] "snapshot-controller-58dbcc7b99-sxc2d" [11463a85-3dc0-4b72-bb04-c82ee19897b7] Running
	I1127 23:27:18.137034   12370 system_pods.go:89] "storage-provisioner" [557ed4ac-b898-4f2c-a162-e19af4bedbf8] Running
	I1127 23:27:18.137038   12370 system_pods.go:89] "tiller-deploy-7b677967b9-6vhzg" [f0db5041-51d0-4f7b-bd01-63edd775e33b] Running
	I1127 23:27:18.137043   12370 system_pods.go:126] duration metric: took 7.639708ms to wait for k8s-apps to be running ...
	I1127 23:27:18.137050   12370 system_svc.go:44] waiting for kubelet service to be running ....
	I1127 23:27:18.137084   12370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:27:18.147757   12370 system_svc.go:56] duration metric: took 10.702129ms WaitForService to wait for kubelet.
	I1127 23:27:18.147781   12370 kubeadm.go:581] duration metric: took 1m20.613892246s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1127 23:27:18.147799   12370 node_conditions.go:102] verifying NodePressure condition ...
	I1127 23:27:18.150454   12370 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1127 23:27:18.150479   12370 node_conditions.go:123] node cpu capacity is 8
	I1127 23:27:18.150489   12370 node_conditions.go:105] duration metric: took 2.686257ms to run NodePressure ...
	I1127 23:27:18.150501   12370 start.go:228] waiting for startup goroutines ...
	I1127 23:27:18.197096   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:18.696351   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:19.196932   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:19.696694   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:20.197013   12370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:20.697087   12370 kapi.go:107] duration metric: took 1m16.020396765s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1127 23:27:20.698996   12370 out.go:177] * Enabled addons: cloud-spanner, inspektor-gadget, ingress-dns, storage-provisioner, nvidia-device-plugin, helm-tiller, metrics-server, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1127 23:27:20.700673   12370 addons.go:502] enable addons completed in 1m23.250367378s: enabled=[cloud-spanner inspektor-gadget ingress-dns storage-provisioner nvidia-device-plugin helm-tiller metrics-server default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1127 23:27:20.700718   12370 start.go:233] waiting for cluster config update ...
	I1127 23:27:20.700734   12370 start.go:242] writing updated cluster config ...
	I1127 23:27:20.700977   12370 ssh_runner.go:195] Run: rm -f paused
	I1127 23:27:20.747499   12370 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1127 23:27:20.750630   12370 out.go:177] * Done! kubectl is now configured to use "addons-931360" cluster and "default" namespace by default
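A note on the health check traced above: the api_server.go wait at 23:27:14 is simply an HTTPS GET against /healthz, repeated until it returns 200. A minimal shell sketch of the same probe, reusing this run's endpoint and assuming curl is available on the host; -k skips TLS verification because the apiserver presents the cluster's own CA:

    # Poll the apiserver health endpoint until it answers 200, as the log above does.
    until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.49.2:8443/healthz)" = "200" ]; do
      sleep 1
    done
    echo "apiserver healthy"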
	
	* 
	* ==> CRI-O <==
	* Nov 27 23:30:02 addons-931360 crio[949]: time="2023-11-27 23:30:02.701407881Z" level=info msg="Creating container: default/hello-world-app-5d77478584-mlnr8/hello-world-app" id=fdac1fa1-f897-4853-aeff-222517a448ae name=/runtime.v1.RuntimeService/CreateContainer
	Nov 27 23:30:02 addons-931360 crio[949]: time="2023-11-27 23:30:02.701529450Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 27 23:30:02 addons-931360 crio[949]: time="2023-11-27 23:30:02.705640731Z" level=info msg="Stopped container a36d92bbb9c542c17c03b791de8ece6f1205f1dcf3dfc065ba10b086520297d5: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=355a9a70-168d-4fe1-9e93-9930a235435d name=/runtime.v1.RuntimeService/StopContainer
	Nov 27 23:30:02 addons-931360 crio[949]: time="2023-11-27 23:30:02.706094406Z" level=info msg="Stopping pod sandbox: 694d6a8fa7b35fe5ee774d6c7177ad791c0614cd3f5b11ebb2589fb4833c808c" id=671c81a1-25aa-4581-b8bc-878ef57f278a name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 27 23:30:02 addons-931360 crio[949]: time="2023-11-27 23:30:02.712472037Z" level=info msg="Stopped pod sandbox: 694d6a8fa7b35fe5ee774d6c7177ad791c0614cd3f5b11ebb2589fb4833c808c" id=671c81a1-25aa-4581-b8bc-878ef57f278a name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 27 23:30:02 addons-931360 crio[949]: time="2023-11-27 23:30:02.783358190Z" level=info msg="Created container da7dd9b2e4e9fb333c02e213a6d6ad0827fc7ade7d062da074e294d25798e773: default/hello-world-app-5d77478584-mlnr8/hello-world-app" id=fdac1fa1-f897-4853-aeff-222517a448ae name=/runtime.v1.RuntimeService/CreateContainer
	Nov 27 23:30:02 addons-931360 crio[949]: time="2023-11-27 23:30:02.783953664Z" level=info msg="Starting container: da7dd9b2e4e9fb333c02e213a6d6ad0827fc7ade7d062da074e294d25798e773" id=6e16b282-39e1-4757-8162-90f0283eafb9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 27 23:30:02 addons-931360 crio[949]: time="2023-11-27 23:30:02.792850787Z" level=info msg="Started container" PID=10784 containerID=da7dd9b2e4e9fb333c02e213a6d6ad0827fc7ade7d062da074e294d25798e773 description=default/hello-world-app-5d77478584-mlnr8/hello-world-app id=6e16b282-39e1-4757-8162-90f0283eafb9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=55ba2523ee06992d5276ba489fd940885db3a3901dc48ab5737d33041bd12e97
	Nov 27 23:30:02 addons-931360 crio[949]: time="2023-11-27 23:30:02.863437506Z" level=info msg="Removing container: a36d92bbb9c542c17c03b791de8ece6f1205f1dcf3dfc065ba10b086520297d5" id=9a6fcdfd-12d7-4d30-ac02-318c7ccbbd8b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 27 23:30:02 addons-931360 crio[949]: time="2023-11-27 23:30:02.879792494Z" level=info msg="Removed container a36d92bbb9c542c17c03b791de8ece6f1205f1dcf3dfc065ba10b086520297d5: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=9a6fcdfd-12d7-4d30-ac02-318c7ccbbd8b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 27 23:30:04 addons-931360 crio[949]: time="2023-11-27 23:30:04.424551009Z" level=info msg="Stopping container: 785156c83d83e69d72ebe37d526992186f68752b8847948b1f9e60ddb21bab71 (timeout: 2s)" id=cddaaaf2-42bb-4f8d-bbd7-2fffbcbec1b2 name=/runtime.v1.RuntimeService/StopContainer
	Nov 27 23:30:06 addons-931360 crio[949]: time="2023-11-27 23:30:06.432664515Z" level=warning msg="Stopping container 785156c83d83e69d72ebe37d526992186f68752b8847948b1f9e60ddb21bab71 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=cddaaaf2-42bb-4f8d-bbd7-2fffbcbec1b2 name=/runtime.v1.RuntimeService/StopContainer
	Nov 27 23:30:06 addons-931360 conmon[6486]: conmon 785156c83d83e69d72eb <ninfo>: container 6500 exited with status 137
	Nov 27 23:30:06 addons-931360 crio[949]: time="2023-11-27 23:30:06.576763364Z" level=info msg="Stopped container 785156c83d83e69d72ebe37d526992186f68752b8847948b1f9e60ddb21bab71: ingress-nginx/ingress-nginx-controller-7c6974c4d8-qhpkz/controller" id=cddaaaf2-42bb-4f8d-bbd7-2fffbcbec1b2 name=/runtime.v1.RuntimeService/StopContainer
	Nov 27 23:30:06 addons-931360 crio[949]: time="2023-11-27 23:30:06.577292821Z" level=info msg="Stopping pod sandbox: 424da69e664f10e0c9cfd9c61c35e9cf03bc23565c51cde827669b25c9da268e" id=87d7ff12-dbfc-46e5-8175-f4fa51d78a97 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 27 23:30:06 addons-931360 crio[949]: time="2023-11-27 23:30:06.580047398Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-65WVPEAGQTIOY3ID - [0:0]\n:KUBE-HP-LGCBWTEOIYMIOTI4 - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-LGCBWTEOIYMIOTI4\n-X KUBE-HP-65WVPEAGQTIOY3ID\nCOMMIT\n"
	Nov 27 23:30:06 addons-931360 crio[949]: time="2023-11-27 23:30:06.581310259Z" level=info msg="Closing host port tcp:80"
	Nov 27 23:30:06 addons-931360 crio[949]: time="2023-11-27 23:30:06.581344962Z" level=info msg="Closing host port tcp:443"
	Nov 27 23:30:06 addons-931360 crio[949]: time="2023-11-27 23:30:06.582692356Z" level=info msg="Host port tcp:80 does not have an open socket"
	Nov 27 23:30:06 addons-931360 crio[949]: time="2023-11-27 23:30:06.582714963Z" level=info msg="Host port tcp:443 does not have an open socket"
	Nov 27 23:30:06 addons-931360 crio[949]: time="2023-11-27 23:30:06.582842374Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7c6974c4d8-qhpkz Namespace:ingress-nginx ID:424da69e664f10e0c9cfd9c61c35e9cf03bc23565c51cde827669b25c9da268e UID:0528bcb1-e003-4144-a61a-4cb187537faa NetNS:/var/run/netns/bd8ee761-1c43-470f-8630-1225fc0d5dc3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 27 23:30:06 addons-931360 crio[949]: time="2023-11-27 23:30:06.582955993Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7c6974c4d8-qhpkz from CNI network \"kindnet\" (type=ptp)"
	Nov 27 23:30:06 addons-931360 crio[949]: time="2023-11-27 23:30:06.631354400Z" level=info msg="Stopped pod sandbox: 424da69e664f10e0c9cfd9c61c35e9cf03bc23565c51cde827669b25c9da268e" id=87d7ff12-dbfc-46e5-8175-f4fa51d78a97 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 27 23:30:06 addons-931360 crio[949]: time="2023-11-27 23:30:06.873774054Z" level=info msg="Removing container: 785156c83d83e69d72ebe37d526992186f68752b8847948b1f9e60ddb21bab71" id=83adb657-8726-4024-9e1b-725eafc0c496 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 27 23:30:06 addons-931360 crio[949]: time="2023-11-27 23:30:06.889851295Z" level=info msg="Removed container 785156c83d83e69d72ebe37d526992186f68752b8847948b1f9e60ddb21bab71: ingress-nginx/ingress-nginx-controller-7c6974c4d8-qhpkz/controller" id=83adb657-8726-4024-9e1b-725eafc0c496 name=/runtime.v1.RuntimeService/RemoveContainer
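The shutdown sequence above ends with conmon reporting "exited with status 137": the ingress controller did not exit within the 2-second grace period, so the runtime escalated to SIGKILL, and 137 = 128 + 9 (SIGKILL). A sketch of driving the same sequence by hand with crictl, where $CID is a hypothetical placeholder for a running container ID:

    # Stop with a 2s grace period; a process that survives the stop signal
    # is SIGKILLed when the timeout expires, yielding exit code 137 (128+9).
    sudo crictl stop --timeout 2 "$CID"
    # The recorded exit code is visible in the container status JSON.
    sudo crictl inspect "$CID" | grep -i exitcode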
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	da7dd9b2e4e9f       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   55ba2523ee069       hello-world-app-5d77478584-mlnr8
	06e4cb4796234       ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1                        2 minutes ago       Running             headlamp                  0                   330ad0fa599fc       headlamp-777fd4b855-dgszv
	a8627b83e62df       docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d                              2 minutes ago       Running             nginx                     0                   8e43ba1be2bd1       nginx
	ec2f02b1c0df7       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 3 minutes ago       Running             gcp-auth                  0                   4a7c4625d9367       gcp-auth-d4c87556c-r7f5z
	db1636a124aab       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                             3 minutes ago       Exited              patch                     2                   63863b680bd74       ingress-nginx-admission-patch-xr4t4
	7c82ac42334a2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   9bc9b6d0d4c62       ingress-nginx-admission-create-nh776
	717004449f3fe       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   9e318b8a85ffc       local-path-provisioner-78b46b4d5c-74fnt
	373f56ce30d39       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   b9d293c3da846       storage-provisioner
	03fe6725a1b4d       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   bce9c38914038       coredns-5dd5756b68-vqrwf
	154a7d051f675       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                             4 minutes ago       Running             kindnet-cni               0                   be8b867fbb54e       kindnet-pcpbf
	e1af6a7cc2a3e       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   bdd9119a7bc1c       kube-proxy-szskt
	9de419682c01f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   1d00d30846df0       etcd-addons-931360
	b8a16d5d06388       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   4622ba1448736       kube-controller-manager-addons-931360
	f1363cb38fe87       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   f6fd98d169003       kube-scheduler-addons-931360
	86812d2dc287c       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   07afc6c9a12bb       kube-apiserver-addons-931360
	
	* 
	* ==> coredns [03fe6725a1b4db1e0268ffbdddf784b68729c9fe7b3cb1f449de8c25c73841a9] <==
	* [INFO] 10.244.0.18:53454 - 14485 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000046609s
	[INFO] 10.244.0.18:41091 - 10240 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004195772s
	[INFO] 10.244.0.18:41091 - 62268 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004143936s
	[INFO] 10.244.0.18:36760 - 58112 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005091969s
	[INFO] 10.244.0.18:36760 - 44860 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005452267s
	[INFO] 10.244.0.18:37167 - 4921 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005187318s
	[INFO] 10.244.0.18:37167 - 8508 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.007172139s
	[INFO] 10.244.0.18:37893 - 50165 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000063746s
	[INFO] 10.244.0.18:37893 - 29169 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00014542s
	[INFO] 10.244.0.19:46343 - 39827 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000147902s
	[INFO] 10.244.0.19:54652 - 38248 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000155799s
	[INFO] 10.244.0.19:54354 - 7287 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000117187s
	[INFO] 10.244.0.19:50575 - 55041 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000167598s
	[INFO] 10.244.0.19:43965 - 17065 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0000927s
	[INFO] 10.244.0.19:46518 - 21706 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000111789s
	[INFO] 10.244.0.19:53261 - 1288 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.005777165s
	[INFO] 10.244.0.19:33299 - 10046 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.006130825s
	[INFO] 10.244.0.19:34378 - 18899 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005905602s
	[INFO] 10.244.0.19:50831 - 36686 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006223939s
	[INFO] 10.244.0.19:51893 - 52678 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004297242s
	[INFO] 10.244.0.19:58835 - 63138 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006174972s
	[INFO] 10.244.0.19:37720 - 31947 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000700797s
	[INFO] 10.244.0.19:43038 - 1339 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000825254s
	[INFO] 10.244.0.24:38062 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000149082s
	[INFO] 10.244.0.24:53580 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0001294s
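The NXDOMAIN bursts above are the pod resolver walking its search path (the GCE host suffixes us-central1-a.c.k8s-minikube.internal, c.k8s-minikube.internal, and google.internal, plus the cluster suffixes) before the bare name finally resolves; with ndots:5 in the pod's resolv.conf, a name with fewer than five dots is tried against each search suffix first. A sketch that reproduces the expansion from inside the cluster, assuming kubectl access to this context; busybox:1.36 is an arbitrary choice, any image shipping nslookup works:

    # Run a throwaway pod and resolve a service name; each search suffix
    # that fails shows up as an NXDOMAIN line in the coredns log above.
    kubectl --context addons-931360 run dnsprobe --rm -i --restart=Never \
      --image=busybox:1.36 -- sh -c \
      'cat /etc/resolv.conf; nslookup registry.kube-system.svc.cluster.local'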
	
	* 
	* ==> describe nodes <==
	* Name:               addons-931360
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-931360
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45
	                    minikube.k8s.io/name=addons-931360
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_27T23_25_45_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-931360
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Nov 2023 23:25:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-931360
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Nov 2023 23:30:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Nov 2023 23:28:17 +0000   Mon, 27 Nov 2023 23:25:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Nov 2023 23:28:17 +0000   Mon, 27 Nov 2023 23:25:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Nov 2023 23:28:17 +0000   Mon, 27 Nov 2023 23:25:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Nov 2023 23:28:17 +0000   Mon, 27 Nov 2023 23:26:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-931360
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	System Info:
	  Machine ID:                 228d7d10b93d414eb9edfacf097c6cff
	  System UUID:                934056f7-2731-4c57-8ae9-6a20d38aeeb2
	  Boot ID:                    ccf6e8a7-9afe-448c-b481-9ad79744adaf
	  Kernel Version:             5.15.0-1046-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-mlnr8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gcp-auth                    gcp-auth-d4c87556c-r7f5z                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  headlamp                    headlamp-777fd4b855-dgszv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 coredns-5dd5756b68-vqrwf                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m14s
	  kube-system                 etcd-addons-931360                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m27s
	  kube-system                 kindnet-pcpbf                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m14s
	  kube-system                 kube-apiserver-addons-931360               250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-controller-manager-addons-931360      200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-proxy-szskt                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-scheduler-addons-931360               100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  local-path-storage          local-path-provisioner-78b46b4d5c-74fnt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m10s                  kube-proxy       
	  Normal  Starting                 4m33s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m33s (x8 over 4m33s)  kubelet          Node addons-931360 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m33s (x8 over 4m33s)  kubelet          Node addons-931360 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m33s (x8 over 4m33s)  kubelet          Node addons-931360 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m27s                  kubelet          Node addons-931360 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m27s                  kubelet          Node addons-931360 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m27s                  kubelet          Node addons-931360 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m15s                  node-controller  Node addons-931360 event: Registered Node addons-931360 in Controller
	  Normal  NodeReady                3m40s                  kubelet          Node addons-931360 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.007513] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003403] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000696] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000695] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000749] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000629] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000673] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.001224] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +9.187217] kauditd_printk_skb: 36 callbacks suppressed
	[Nov27 23:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 d8 e8 e2 51 91 46 e0 38 52 35 4d 08 00
	[  +1.032128] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 46 d8 e8 e2 51 91 46 e0 38 52 35 4d 08 00
	[  +2.011801] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 d8 e8 e2 51 91 46 e0 38 52 35 4d 08 00
	[  +4.195537] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 d8 e8 e2 51 91 46 e0 38 52 35 4d 08 00
	[Nov27 23:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 46 d8 e8 e2 51 91 46 e0 38 52 35 4d 08 00
	[ +16.126368] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 46 d8 e8 e2 51 91 46 e0 38 52 35 4d 08 00
	[ +32.764599] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 46 d8 e8 e2 51 91 46 e0 38 52 35 4d 08 00
	
	* 
	* ==> etcd [9de419682c01f005cdb114cb47335ca1c1a72746abb30a1f0cd5f06ac741e9b6] <==
	* {"level":"warn","ts":"2023-11-27T23:26:00.849419Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.249221ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-931360\" ","response":"range_response_count:1 size:5654"}
	{"level":"info","ts":"2023-11-27T23:26:00.849442Z","caller":"traceutil/trace.go:171","msg":"trace[2097215815] range","detail":"{range_begin:/registry/minions/addons-931360; range_end:; response_count:1; response_revision:423; }","duration":"300.274878ms","start":"2023-11-27T23:26:00.54916Z","end":"2023-11-27T23:26:00.849435Z","steps":["trace[2097215815] 'agreement among raft nodes before linearized reading'  (duration: 300.212138ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T23:26:00.849462Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-27T23:26:00.549156Z","time spent":"300.300192ms","remote":"127.0.0.1:47528","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":1,"response size":5678,"request content":"key:\"/registry/minions/addons-931360\" "}
	{"level":"info","ts":"2023-11-27T23:26:01.063006Z","caller":"traceutil/trace.go:171","msg":"trace[873367535] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"109.505978ms","start":"2023-11-27T23:26:00.95348Z","end":"2023-11-27T23:26:01.062986Z","steps":["trace[873367535] 'process raft request'  (duration: 89.054121ms)","trace[873367535] 'compare'  (duration: 19.923223ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-27T23:26:01.063262Z","caller":"traceutil/trace.go:171","msg":"trace[1792787601] transaction","detail":"{read_only:false; response_revision:426; number_of_response:1; }","duration":"108.842436ms","start":"2023-11-27T23:26:00.954407Z","end":"2023-11-27T23:26:01.063249Z","steps":["trace[1792787601] 'process raft request'  (duration: 108.260887ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-27T23:26:01.063567Z","caller":"traceutil/trace.go:171","msg":"trace[58772263] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"109.034453ms","start":"2023-11-27T23:26:00.954521Z","end":"2023-11-27T23:26:01.063556Z","steps":["trace[58772263] 'process raft request'  (duration: 108.209006ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-27T23:26:01.063596Z","caller":"traceutil/trace.go:171","msg":"trace[1447002509] linearizableReadLoop","detail":"{readStateIndex:440; appliedIndex:438; }","duration":"109.116433ms","start":"2023-11-27T23:26:00.954472Z","end":"2023-11-27T23:26:01.063589Z","steps":["trace[1447002509] 'read index received'  (duration: 88.004724ms)","trace[1447002509] 'applied index is now lower than readState.Index'  (duration: 21.110668ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-27T23:26:01.063647Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.176312ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2023-11-27T23:26:01.143117Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.175424ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-931360\" ","response":"range_response_count:1 size:5654"}
	{"level":"info","ts":"2023-11-27T23:26:01.1432Z","caller":"traceutil/trace.go:171","msg":"trace[1561727418] range","detail":"{range_begin:/registry/minions/addons-931360; range_end:; response_count:1; response_revision:428; }","duration":"188.267876ms","start":"2023-11-27T23:26:00.95492Z","end":"2023-11-27T23:26:01.143188Z","steps":["trace[1561727418] 'agreement among raft nodes before linearized reading'  (duration: 188.131615ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-27T23:26:01.14314Z","caller":"traceutil/trace.go:171","msg":"trace[1102320743] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:428; }","duration":"188.671723ms","start":"2023-11-27T23:26:00.954451Z","end":"2023-11-27T23:26:01.143123Z","steps":["trace[1102320743] 'agreement among raft nodes before linearized reading'  (duration: 109.155878ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T23:26:01.143213Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.092088ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2023-11-27T23:26:01.143431Z","caller":"traceutil/trace.go:171","msg":"trace[626655425] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:428; }","duration":"100.315652ms","start":"2023-11-27T23:26:01.043105Z","end":"2023-11-27T23:26:01.143421Z","steps":["trace[626655425] 'agreement among raft nodes before linearized reading'  (duration: 100.062871ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-27T23:26:01.85551Z","caller":"traceutil/trace.go:171","msg":"trace[1338266302] linearizableReadLoop","detail":"{readStateIndex:460; appliedIndex:456; }","duration":"113.066647ms","start":"2023-11-27T23:26:01.742427Z","end":"2023-11-27T23:26:01.855493Z","steps":["trace[1338266302] 'read index received'  (duration: 830.756µs)","trace[1338266302] 'applied index is now lower than readState.Index'  (duration: 112.235312ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-27T23:26:01.858467Z","caller":"traceutil/trace.go:171","msg":"trace[2098362376] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"193.939875ms","start":"2023-11-27T23:26:01.664511Z","end":"2023-11-27T23:26:01.85845Z","steps":["trace[2098362376] 'process raft request'  (duration: 189.651335ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-27T23:26:01.865646Z","caller":"traceutil/trace.go:171","msg":"trace[969283216] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"200.959072ms","start":"2023-11-27T23:26:01.664672Z","end":"2023-11-27T23:26:01.865631Z","steps":["trace[969283216] 'process raft request'  (duration: 190.63963ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-27T23:26:01.865772Z","caller":"traceutil/trace.go:171","msg":"trace[1668072562] transaction","detail":"{read_only:false; number_of_response:1; response_revision:443; }","duration":"200.968965ms","start":"2023-11-27T23:26:01.664786Z","end":"2023-11-27T23:26:01.865755Z","steps":["trace[1668072562] 'process raft request'  (duration: 190.569756ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-27T23:26:01.865872Z","caller":"traceutil/trace.go:171","msg":"trace[2111381949] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"200.934533ms","start":"2023-11-27T23:26:01.664931Z","end":"2023-11-27T23:26:01.865865Z","steps":["trace[2111381949] 'process raft request'  (duration: 190.454312ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-27T23:26:01.866023Z","caller":"traceutil/trace.go:171","msg":"trace[416704473] transaction","detail":"{read_only:false; response_revision:445; number_of_response:1; }","duration":"123.366479ms","start":"2023-11-27T23:26:01.742649Z","end":"2023-11-27T23:26:01.866016Z","steps":["trace[416704473] 'process raft request'  (duration: 112.807692ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T23:26:01.866152Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.729458ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2023-11-27T23:26:01.866174Z","caller":"traceutil/trace.go:171","msg":"trace[2115551639] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:445; }","duration":"123.763879ms","start":"2023-11-27T23:26:01.742404Z","end":"2023-11-27T23:26:01.866168Z","steps":["trace[2115551639] 'agreement among raft nodes before linearized reading'  (duration: 123.705217ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-27T23:27:07.552858Z","caller":"traceutil/trace.go:171","msg":"trace[2039015639] transaction","detail":"{read_only:false; response_revision:1108; number_of_response:1; }","duration":"132.965136ms","start":"2023-11-27T23:27:07.419872Z","end":"2023-11-27T23:27:07.552837Z","steps":["trace[2039015639] 'process raft request'  (duration: 132.751266ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-27T23:27:24.557401Z","caller":"traceutil/trace.go:171","msg":"trace[348714751] transaction","detail":"{read_only:false; response_revision:1197; number_of_response:1; }","duration":"125.205259ms","start":"2023-11-27T23:27:24.432181Z","end":"2023-11-27T23:27:24.557386Z","steps":["trace[348714751] 'process raft request'  (duration: 125.09486ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-27T23:27:24.559104Z","caller":"traceutil/trace.go:171","msg":"trace[381547803] transaction","detail":"{read_only:false; response_revision:1198; number_of_response:1; }","duration":"124.739817ms","start":"2023-11-27T23:27:24.43435Z","end":"2023-11-27T23:27:24.55909Z","steps":["trace[381547803] 'process raft request'  (duration: 124.642479ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-27T23:27:46.761837Z","caller":"traceutil/trace.go:171","msg":"trace[552692658] transaction","detail":"{read_only:false; response_revision:1451; number_of_response:1; }","duration":"115.175066ms","start":"2023-11-27T23:27:46.646648Z","end":"2023-11-27T23:27:46.761823Z","steps":["trace[552692658] 'process raft request'  (duration: 115.064821ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [ec2f02b1c0df70da94fa0788c96ea6eb728b03e6a5ceea902de015277b93a4b1] <==
	* 2023/11/27 23:27:06 GCP Auth Webhook started!
	2023/11/27 23:27:21 Ready to marshal response ...
	2023/11/27 23:27:21 Ready to write response ...
	2023/11/27 23:27:21 Ready to marshal response ...
	2023/11/27 23:27:21 Ready to write response ...
	2023/11/27 23:27:30 Ready to marshal response ...
	2023/11/27 23:27:30 Ready to write response ...
	2023/11/27 23:27:30 Ready to marshal response ...
	2023/11/27 23:27:30 Ready to write response ...
	2023/11/27 23:27:35 Ready to marshal response ...
	2023/11/27 23:27:35 Ready to write response ...
	2023/11/27 23:27:40 Ready to marshal response ...
	2023/11/27 23:27:40 Ready to write response ...
	2023/11/27 23:27:41 Ready to marshal response ...
	2023/11/27 23:27:41 Ready to write response ...
	2023/11/27 23:27:41 Ready to marshal response ...
	2023/11/27 23:27:41 Ready to write response ...
	2023/11/27 23:27:41 Ready to marshal response ...
	2023/11/27 23:27:41 Ready to write response ...
	2023/11/27 23:28:01 Ready to marshal response ...
	2023/11/27 23:28:01 Ready to write response ...
	2023/11/27 23:28:19 Ready to marshal response ...
	2023/11/27 23:28:19 Ready to write response ...
	2023/11/27 23:30:01 Ready to marshal response ...
	2023/11/27 23:30:01 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  23:30:11 up 12 min,  0 users,  load average: 0.33, 1.02, 0.57
	Linux addons-931360 5.15.0-1046-gcp #54~20.04.1-Ubuntu SMP Wed Oct 25 08:22:15 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [154a7d051f67565f798dbee09f9db80f380ba6a29a01601b81138c31867614d0] <==
	* I1127 23:28:10.813603       1 main.go:227] handling current node
	I1127 23:28:20.817701       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:28:20.817724       1 main.go:227] handling current node
	I1127 23:28:30.821239       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:28:30.821263       1 main.go:227] handling current node
	I1127 23:28:40.826323       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:28:40.826363       1 main.go:227] handling current node
	I1127 23:28:50.837199       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:28:50.837221       1 main.go:227] handling current node
	I1127 23:29:00.841459       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:29:00.841483       1 main.go:227] handling current node
	I1127 23:29:10.853748       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:29:10.853771       1 main.go:227] handling current node
	I1127 23:29:20.857621       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:29:20.857644       1 main.go:227] handling current node
	I1127 23:29:30.869175       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:29:30.869199       1 main.go:227] handling current node
	I1127 23:29:40.874126       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:29:40.874148       1 main.go:227] handling current node
	I1127 23:29:50.885261       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:29:50.885287       1 main.go:227] handling current node
	I1127 23:30:00.889033       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:30:00.889056       1 main.go:227] handling current node
	I1127 23:30:10.901630       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:30:10.901655       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [86812d2dc287c946da981c60ee60c06aadc5c2ab2db5c7ce9acf4edadd910370] <==
	* I1127 23:27:40.638115       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1127 23:27:41.005881       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.185.194"}
	I1127 23:27:41.178636       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.255.195"}
	I1127 23:27:54.464671       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1127 23:28:13.811367       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1127 23:28:34.964556       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:28:34.964605       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 23:28:34.970972       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:28:34.971115       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 23:28:34.977866       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:28:34.977979       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 23:28:34.980337       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:28:34.980373       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 23:28:34.986952       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:28:34.987072       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 23:28:34.991946       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:28:34.992056       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 23:28:35.000921       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:28:35.000962       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 23:28:35.001558       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:28:35.001645       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1127 23:28:35.980963       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1127 23:28:36.001205       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1127 23:28:36.052801       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1127 23:30:01.806407       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.247.186"}
	
	* 
	* ==> kube-controller-manager [b8a16d5d06388121f5f5c09fdda2330fde204b55cbc579f5c30302102dafc83d] <==
	* W1127 23:29:07.841720       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:29:07.841746       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1127 23:29:15.392736       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:29:15.392773       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1127 23:29:23.142596       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:29:23.142624       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1127 23:29:35.882758       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:29:35.882785       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1127 23:29:36.334141       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:29:36.334175       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1127 23:29:53.010492       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:29:53.010521       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1127 23:29:54.111068       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:29:54.111098       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1127 23:30:01.641791       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1127 23:30:01.654453       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-mlnr8"
	I1127 23:30:01.661115       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="19.698034ms"
	I1127 23:30:01.666363       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="5.184484ms"
	I1127 23:30:01.666453       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="47.829µs"
	I1127 23:30:01.667628       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="61.022µs"
	I1127 23:30:02.881254       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="6.580797ms"
	I1127 23:30:02.881395       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="73.18µs"
	I1127 23:30:03.411799       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1127 23:30:03.412817       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="5.361µs"
	I1127 23:30:03.416128       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	
	* 
	* ==> kube-proxy [e1af6a7cc2a3e504ce53d9fb97cebb26969056645d5b06660f5e873ea00c2088] <==
	* I1127 23:25:59.561355       1 server_others.go:69] "Using iptables proxy"
	I1127 23:25:59.746025       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1127 23:26:01.159063       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1127 23:26:01.451501       1 server_others.go:152] "Using iptables Proxier"
	I1127 23:26:01.451658       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1127 23:26:01.451716       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1127 23:26:01.451794       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1127 23:26:01.452212       1 server.go:846] "Version info" version="v1.28.4"
	I1127 23:26:01.452608       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1127 23:26:01.453587       1 config.go:188] "Starting service config controller"
	I1127 23:26:01.457117       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1127 23:26:01.455972       1 config.go:315] "Starting node config controller"
	I1127 23:26:01.457239       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1127 23:26:01.454329       1 config.go:97] "Starting endpoint slice config controller"
	I1127 23:26:01.462411       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1127 23:26:01.644418       1 shared_informer.go:318] Caches are synced for node config
	I1127 23:26:01.651557       1 shared_informer.go:318] Caches are synced for service config
	I1127 23:26:01.651577       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [f1363cb38fe8708db9d1f0e2c16e80f99162eaca5c8cb1de29e71377e7ced14f] <==
	* W1127 23:25:41.860478       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1127 23:25:41.860547       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1127 23:25:41.860567       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1127 23:25:41.860579       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1127 23:25:41.860588       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1127 23:25:41.860612       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1127 23:25:41.860663       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1127 23:25:41.860638       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1127 23:25:41.860692       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1127 23:25:41.860708       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1127 23:25:41.860655       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1127 23:25:41.860727       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1127 23:25:41.860736       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1127 23:25:41.860759       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1127 23:25:42.708252       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1127 23:25:42.708277       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1127 23:25:42.728524       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1127 23:25:42.728559       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1127 23:25:42.750796       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1127 23:25:42.750829       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1127 23:25:42.756961       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1127 23:25:42.756993       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1127 23:25:42.902689       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1127 23:25:42.902725       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1127 23:25:43.356910       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Nov 27 23:30:01 addons-931360 kubelet[1557]: I1127 23:30:01.676054    1557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5e02327a-5e91-4746-bdfb-99f444e76fb9-gcp-creds\") pod \"hello-world-app-5d77478584-mlnr8\" (UID: \"5e02327a-5e91-4746-bdfb-99f444e76fb9\") " pod="default/hello-world-app-5d77478584-mlnr8"
	Nov 27 23:30:01 addons-931360 kubelet[1557]: I1127 23:30:01.676105    1557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdj6b\" (UniqueName: \"kubernetes.io/projected/5e02327a-5e91-4746-bdfb-99f444e76fb9-kube-api-access-pdj6b\") pod \"hello-world-app-5d77478584-mlnr8\" (UID: \"5e02327a-5e91-4746-bdfb-99f444e76fb9\") " pod="default/hello-world-app-5d77478584-mlnr8"
	Nov 27 23:30:02 addons-931360 kubelet[1557]: W1127 23:30:02.066929    1557 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/768276b8ed6b009fbef0cba4436e6d391c08de1488d7a17be7c751cf789af39b/crio-55ba2523ee06992d5276ba489fd940885db3a3901dc48ab5737d33041bd12e97 WatchSource:0}: Error finding container 55ba2523ee06992d5276ba489fd940885db3a3901dc48ab5737d33041bd12e97: Status 404 returned error can't find the container with id 55ba2523ee06992d5276ba489fd940885db3a3901dc48ab5737d33041bd12e97
	Nov 27 23:30:02 addons-931360 kubelet[1557]: I1127 23:30:02.785212    1557 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pf6n\" (UniqueName: \"kubernetes.io/projected/db96c30c-42e1-4554-857c-ecda27ca99ed-kube-api-access-7pf6n\") pod \"db96c30c-42e1-4554-857c-ecda27ca99ed\" (UID: \"db96c30c-42e1-4554-857c-ecda27ca99ed\") "
	Nov 27 23:30:02 addons-931360 kubelet[1557]: I1127 23:30:02.787326    1557 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db96c30c-42e1-4554-857c-ecda27ca99ed-kube-api-access-7pf6n" (OuterVolumeSpecName: "kube-api-access-7pf6n") pod "db96c30c-42e1-4554-857c-ecda27ca99ed" (UID: "db96c30c-42e1-4554-857c-ecda27ca99ed"). InnerVolumeSpecName "kube-api-access-7pf6n". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 27 23:30:02 addons-931360 kubelet[1557]: I1127 23:30:02.862501    1557 scope.go:117] "RemoveContainer" containerID="a36d92bbb9c542c17c03b791de8ece6f1205f1dcf3dfc065ba10b086520297d5"
	Nov 27 23:30:02 addons-931360 kubelet[1557]: I1127 23:30:02.874357    1557 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-mlnr8" podStartSLOduration=1.245528171 podCreationTimestamp="2023-11-27 23:30:01 +0000 UTC" firstStartedPulling="2023-11-27 23:30:02.070270531 +0000 UTC m=+257.341678122" lastFinishedPulling="2023-11-27 23:30:02.699061517 +0000 UTC m=+257.970469107" observedRunningTime="2023-11-27 23:30:02.874116967 +0000 UTC m=+258.145524566" watchObservedRunningTime="2023-11-27 23:30:02.874319156 +0000 UTC m=+258.145726755"
	Nov 27 23:30:02 addons-931360 kubelet[1557]: I1127 23:30:02.880085    1557 scope.go:117] "RemoveContainer" containerID="a36d92bbb9c542c17c03b791de8ece6f1205f1dcf3dfc065ba10b086520297d5"
	Nov 27 23:30:02 addons-931360 kubelet[1557]: E1127 23:30:02.880539    1557 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a36d92bbb9c542c17c03b791de8ece6f1205f1dcf3dfc065ba10b086520297d5\": container with ID starting with a36d92bbb9c542c17c03b791de8ece6f1205f1dcf3dfc065ba10b086520297d5 not found: ID does not exist" containerID="a36d92bbb9c542c17c03b791de8ece6f1205f1dcf3dfc065ba10b086520297d5"
	Nov 27 23:30:02 addons-931360 kubelet[1557]: I1127 23:30:02.880593    1557 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a36d92bbb9c542c17c03b791de8ece6f1205f1dcf3dfc065ba10b086520297d5"} err="failed to get container status \"a36d92bbb9c542c17c03b791de8ece6f1205f1dcf3dfc065ba10b086520297d5\": rpc error: code = NotFound desc = could not find container \"a36d92bbb9c542c17c03b791de8ece6f1205f1dcf3dfc065ba10b086520297d5\": container with ID starting with a36d92bbb9c542c17c03b791de8ece6f1205f1dcf3dfc065ba10b086520297d5 not found: ID does not exist"
	Nov 27 23:30:02 addons-931360 kubelet[1557]: I1127 23:30:02.885716    1557 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7pf6n\" (UniqueName: \"kubernetes.io/projected/db96c30c-42e1-4554-857c-ecda27ca99ed-kube-api-access-7pf6n\") on node \"addons-931360\" DevicePath \"\""
	Nov 27 23:30:04 addons-931360 kubelet[1557]: I1127 23:30:04.852725    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="85840a0f-4fad-4528-b992-6fd497c7ea6a" path="/var/lib/kubelet/pods/85840a0f-4fad-4528-b992-6fd497c7ea6a/volumes"
	Nov 27 23:30:04 addons-931360 kubelet[1557]: I1127 23:30:04.853195    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="db96c30c-42e1-4554-857c-ecda27ca99ed" path="/var/lib/kubelet/pods/db96c30c-42e1-4554-857c-ecda27ca99ed/volumes"
	Nov 27 23:30:04 addons-931360 kubelet[1557]: I1127 23:30:04.853728    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e5de0acd-37fe-4a03-9a9d-88e7e00faa77" path="/var/lib/kubelet/pods/e5de0acd-37fe-4a03-9a9d-88e7e00faa77/volumes"
	Nov 27 23:30:06 addons-931360 kubelet[1557]: I1127 23:30:06.714676    1557 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0528bcb1-e003-4144-a61a-4cb187537faa-webhook-cert\") pod \"0528bcb1-e003-4144-a61a-4cb187537faa\" (UID: \"0528bcb1-e003-4144-a61a-4cb187537faa\") "
	Nov 27 23:30:06 addons-931360 kubelet[1557]: I1127 23:30:06.714734    1557 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zktm8\" (UniqueName: \"kubernetes.io/projected/0528bcb1-e003-4144-a61a-4cb187537faa-kube-api-access-zktm8\") pod \"0528bcb1-e003-4144-a61a-4cb187537faa\" (UID: \"0528bcb1-e003-4144-a61a-4cb187537faa\") "
	Nov 27 23:30:06 addons-931360 kubelet[1557]: I1127 23:30:06.716618    1557 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0528bcb1-e003-4144-a61a-4cb187537faa-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "0528bcb1-e003-4144-a61a-4cb187537faa" (UID: "0528bcb1-e003-4144-a61a-4cb187537faa"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 27 23:30:06 addons-931360 kubelet[1557]: I1127 23:30:06.716702    1557 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0528bcb1-e003-4144-a61a-4cb187537faa-kube-api-access-zktm8" (OuterVolumeSpecName: "kube-api-access-zktm8") pod "0528bcb1-e003-4144-a61a-4cb187537faa" (UID: "0528bcb1-e003-4144-a61a-4cb187537faa"). InnerVolumeSpecName "kube-api-access-zktm8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 27 23:30:06 addons-931360 kubelet[1557]: I1127 23:30:06.815392    1557 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0528bcb1-e003-4144-a61a-4cb187537faa-webhook-cert\") on node \"addons-931360\" DevicePath \"\""
	Nov 27 23:30:06 addons-931360 kubelet[1557]: I1127 23:30:06.815445    1557 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zktm8\" (UniqueName: \"kubernetes.io/projected/0528bcb1-e003-4144-a61a-4cb187537faa-kube-api-access-zktm8\") on node \"addons-931360\" DevicePath \"\""
	Nov 27 23:30:06 addons-931360 kubelet[1557]: I1127 23:30:06.853264    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0528bcb1-e003-4144-a61a-4cb187537faa" path="/var/lib/kubelet/pods/0528bcb1-e003-4144-a61a-4cb187537faa/volumes"
	Nov 27 23:30:06 addons-931360 kubelet[1557]: I1127 23:30:06.872649    1557 scope.go:117] "RemoveContainer" containerID="785156c83d83e69d72ebe37d526992186f68752b8847948b1f9e60ddb21bab71"
	Nov 27 23:30:06 addons-931360 kubelet[1557]: I1127 23:30:06.890113    1557 scope.go:117] "RemoveContainer" containerID="785156c83d83e69d72ebe37d526992186f68752b8847948b1f9e60ddb21bab71"
	Nov 27 23:30:06 addons-931360 kubelet[1557]: E1127 23:30:06.890453    1557 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"785156c83d83e69d72ebe37d526992186f68752b8847948b1f9e60ddb21bab71\": container with ID starting with 785156c83d83e69d72ebe37d526992186f68752b8847948b1f9e60ddb21bab71 not found: ID does not exist" containerID="785156c83d83e69d72ebe37d526992186f68752b8847948b1f9e60ddb21bab71"
	Nov 27 23:30:06 addons-931360 kubelet[1557]: I1127 23:30:06.890495    1557 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"785156c83d83e69d72ebe37d526992186f68752b8847948b1f9e60ddb21bab71"} err="failed to get container status \"785156c83d83e69d72ebe37d526992186f68752b8847948b1f9e60ddb21bab71\": rpc error: code = NotFound desc = could not find container \"785156c83d83e69d72ebe37d526992186f68752b8847948b1f9e60ddb21bab71\": container with ID starting with 785156c83d83e69d72ebe37d526992186f68752b8847948b1f9e60ddb21bab71 not found: ID does not exist"
	
	* 
	* ==> storage-provisioner [373f56ce30d3921e996a675ba0aeca33e3f68731ea419b7449c07ff245e45729] <==
	* I1127 23:26:31.981995       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1127 23:26:31.991097       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1127 23:26:31.991165       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1127 23:26:31.999242       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1127 23:26:31.999361       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-931360_7b0c09da-1ef8-4327-9d74-b21f47f5f739!
	I1127 23:26:32.000112       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2c4a050d-027e-4c58-899f-12e2402e0caa", APIVersion:"v1", ResourceVersion:"893", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-931360_7b0c09da-1ef8-4327-9d74-b21f47f5f739 became leader
	I1127 23:26:32.100340       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-931360_7b0c09da-1ef8-4327-9d74-b21f47f5f739!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-931360 -n addons-931360
helpers_test.go:261: (dbg) Run:  kubectl --context addons-931360 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.67s)
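When this ingress assertion fails, a quick manual sanity check (a sketch, not part of the test suite; the kubectl context name comes from this run) is to read the controller's resources directly:

	kubectl --context addons-931360 -n ingress-nginx get pods,svc -o wide
	kubectl --context addons-931360 get ingress -A

Both are plain kubectl reads and are safe to run against a live profile.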

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (10.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 image load --daemon gcr.io/google-containers/addon-resizer:functional-223758 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-223758 image load --daemon gcr.io/google-containers/addon-resizer:functional-223758 --alsologtostderr: (7.85269791s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-223758 image ls: (2.247741577s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-223758" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (10.10s)
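The failing assertion can be reproduced by hand with the same commands the test drives; a minimal sketch assuming the profile from this run (functional-223758) — the grep filter is added here for convenience and is not part of the test:

	out/minikube-linux-amd64 -p functional-223758 image load --daemon gcr.io/google-containers/addon-resizer:functional-223758
	out/minikube-linux-amd64 -p functional-223758 image ls | grep addon-resizer

An empty grep result corresponds to the "image is not there" failure above.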

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (182.25s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-719415 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-719415 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (16.408462321s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-719415 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-719415 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [aad9cb1e-07d5-4445-a153-c5010aa509d0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [aad9cb1e-07d5-4445-a153-c5010aa509d0] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.007244749s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-719415 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1127 23:37:20.766215   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt: no such file or directory
E1127 23:37:48.449557   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-719415 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.574484593s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
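For reference, curl reports exit status 28 for a timeout (CURLE_OPERATION_TIMEDOUT), which matches the 2m10s the ssh step ran before giving up; retrying the same check by hand (profile name from this run) is simply:

	out/minikube-linux-amd64 -p ingress-addon-legacy-719415 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"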
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-719415 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-719415 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E1127 23:38:51.043327   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/functional-223758/client.crt: no such file or directory
E1127 23:38:51.048649   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/functional-223758/client.crt: no such file or directory
E1127 23:38:51.058938   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/functional-223758/client.crt: no such file or directory
E1127 23:38:51.079207   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/functional-223758/client.crt: no such file or directory
E1127 23:38:51.119488   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/functional-223758/client.crt: no such file or directory
E1127 23:38:51.199839   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/functional-223758/client.crt: no such file or directory
E1127 23:38:51.360231   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/functional-223758/client.crt: no such file or directory
E1127 23:38:51.680831   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/functional-223758/client.crt: no such file or directory
E1127 23:38:52.321806   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/functional-223758/client.crt: no such file or directory
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.011984465s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

                                                
                                                

                                                
                                                

                                                
                                                
stderr: 
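The DNS step can likewise be retried by hand; a minimal sketch under the same assumptions the test makes (192.168.49.2 is this profile's node IP as reported by the ip command, and the ingress-dns addon must still be enabled — the test disables it immediately below):

	out/minikube-linux-amd64 -p ingress-addon-legacy-719415 ip
	nslookup hello-john.test 192.168.49.2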
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-719415 addons disable ingress-dns --alsologtostderr -v=1
E1127 23:38:53.602810   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/functional-223758/client.crt: no such file or directory
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-719415 addons disable ingress-dns --alsologtostderr -v=1: (1.223717627s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-719415 addons disable ingress --alsologtostderr -v=1
E1127 23:38:56.164550   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/functional-223758/client.crt: no such file or directory
E1127 23:39:01.285177   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/functional-223758/client.crt: no such file or directory
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-719415 addons disable ingress --alsologtostderr -v=1: (7.40835207s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-719415
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-719415:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ec1d92d113b71ebb603305ba9e43f5ac96daf7eacce4afee88ca4224a5833610",
	        "Created": "2023-11-27T23:34:52.409304084Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 52218,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-27T23:34:52.692312013Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7b13b8068c138827ed6edd3fefc1858e39f15798035b600ada929f3fdbe10859",
	        "ResolvConfPath": "/var/lib/docker/containers/ec1d92d113b71ebb603305ba9e43f5ac96daf7eacce4afee88ca4224a5833610/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ec1d92d113b71ebb603305ba9e43f5ac96daf7eacce4afee88ca4224a5833610/hostname",
	        "HostsPath": "/var/lib/docker/containers/ec1d92d113b71ebb603305ba9e43f5ac96daf7eacce4afee88ca4224a5833610/hosts",
	        "LogPath": "/var/lib/docker/containers/ec1d92d113b71ebb603305ba9e43f5ac96daf7eacce4afee88ca4224a5833610/ec1d92d113b71ebb603305ba9e43f5ac96daf7eacce4afee88ca4224a5833610-json.log",
	        "Name": "/ingress-addon-legacy-719415",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-719415:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-719415",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0262ca17eca89d22a1fa6d1ff7e10553f2b85e025f548fb62701540e879da2d1-init/diff:/var/lib/docker/overlay2/7130e71395072cd8dcd718fa28933a7b57b5714a10c6614947d04756418543ae/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0262ca17eca89d22a1fa6d1ff7e10553f2b85e025f548fb62701540e879da2d1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0262ca17eca89d22a1fa6d1ff7e10553f2b85e025f548fb62701540e879da2d1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0262ca17eca89d22a1fa6d1ff7e10553f2b85e025f548fb62701540e879da2d1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-719415",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-719415/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-719415",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-719415",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-719415",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "21e5838a1dda64662daf238588001fad44e0a2c4670ed332cecf388504fc18f6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/21e5838a1dda",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-719415": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ec1d92d113b7",
	                        "ingress-addon-legacy-719415"
	                    ],
	                    "NetworkID": "73ff262388599d8f8e725f6fabee50575dfb803c9f65397b7e4c98544ce659fc",
	                    "EndpointID": "443468483ed94896da037d36ac3733f76ecc3690c6a9395f21c854bd3bbe8833",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
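
The post-mortem below keys off a handful of fields buried in this dump: the container state, the host-mapped SSH port, and the static IP on the cluster network. As a minimal sketch (not from the test suite), the same fields can be pulled out programmatically with docker inspect and Go's encoding/json, assuming Docker is on PATH; unknown keys in the JSON are simply ignored on decode.

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// Just the fields the post-mortem cares about; docker inspect emits far more.
	type inspect struct {
		Name  string
		State struct {
			Status  string
			Running bool
		}
		NetworkSettings struct {
			Ports    map[string][]struct{ HostIp, HostPort string }
			Networks map[string]struct{ IPAddress string }
		}
	}

	func main() {
		const name = "ingress-addon-legacy-719415" // container under inspection
		out, err := exec.Command("docker", "inspect", name).Output()
		if err != nil {
			log.Fatal(err)
		}
		// docker inspect always returns a JSON array of containers.
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil {
			log.Fatal(err)
		}
		for _, c := range containers {
			fmt.Printf("%s: status=%s running=%v\n", c.Name, c.State.Status, c.State.Running)
			if ssh := c.NetworkSettings.Ports["22/tcp"]; len(ssh) > 0 {
				fmt.Printf("  ssh mapped to %s:%s\n", ssh[0].HostIp, ssh[0].HostPort)
			}
			for netName, cfg := range c.NetworkSettings.Networks {
				fmt.Printf("  network %s: ip=%s\n", netName, cfg.IPAddress)
			}
		}
	}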
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-719415 -n ingress-addon-legacy-719415
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-719415 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-719415 logs -n 25: (1.048590254s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-223758                                                      | functional-223758           | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC | 27 Nov 23 23:34 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| ssh            | functional-223758 ssh findmnt                                          | functional-223758           | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC |                     |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-223758                                                   | functional-223758           | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2236456127/001:/mount2 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| service        | functional-223758 service                                              | functional-223758           | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC | 27 Nov 23 23:34 UTC |
	|                | hello-node --url                                                       |                             |         |         |                     |                     |
	| update-context | functional-223758                                                      | functional-223758           | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC | 27 Nov 23 23:34 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-223758                                                      | functional-223758           | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC | 27 Nov 23 23:34 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-223758                                                      | functional-223758           | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC | 27 Nov 23 23:34 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-223758 ssh findmnt                                          | functional-223758           | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC | 27 Nov 23 23:34 UTC |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| image          | functional-223758                                                      | functional-223758           | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC | 27 Nov 23 23:34 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-223758 ssh pgrep                                            | functional-223758           | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| ssh            | functional-223758 ssh findmnt                                          | functional-223758           | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC | 27 Nov 23 23:34 UTC |
	|                | -T /mount2                                                             |                             |         |         |                     |                     |
	| image          | functional-223758                                                      | functional-223758           | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC | 27 Nov 23 23:34 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-223758 image build -t                                       | functional-223758           | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC | 27 Nov 23 23:34 UTC |
	|                | localhost/my-image:functional-223758                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| ssh            | functional-223758 ssh findmnt                                          | functional-223758           | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC | 27 Nov 23 23:34 UTC |
	|                | -T /mount3                                                             |                             |         |         |                     |                     |
	| image          | functional-223758                                                      | functional-223758           | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC | 27 Nov 23 23:34 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| mount          | -p functional-223758                                                   | functional-223758           | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC |                     |
	|                | --kill=true                                                            |                             |         |         |                     |                     |
	| image          | functional-223758 image ls                                             | functional-223758           | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC | 27 Nov 23 23:34 UTC |
	| delete         | -p functional-223758                                                   | functional-223758           | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC | 27 Nov 23 23:34 UTC |
	| start          | -p ingress-addon-legacy-719415                                         | ingress-addon-legacy-719415 | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC | 27 Nov 23 23:35 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-719415                                            | ingress-addon-legacy-719415 | jenkins | v1.32.0 | 27 Nov 23 23:35 UTC | 27 Nov 23 23:36 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-719415                                            | ingress-addon-legacy-719415 | jenkins | v1.32.0 | 27 Nov 23 23:36 UTC | 27 Nov 23 23:36 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-719415                                            | ingress-addon-legacy-719415 | jenkins | v1.32.0 | 27 Nov 23 23:36 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-719415 ip                                         | ingress-addon-legacy-719415 | jenkins | v1.32.0 | 27 Nov 23 23:38 UTC | 27 Nov 23 23:38 UTC |
	| addons         | ingress-addon-legacy-719415                                            | ingress-addon-legacy-719415 | jenkins | v1.32.0 | 27 Nov 23 23:38 UTC | 27 Nov 23 23:38 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-719415                                            | ingress-addon-legacy-719415 | jenkins | v1.32.0 | 27 Nov 23 23:38 UTC | 27 Nov 23 23:39 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 23:34:40
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 23:34:40.712246   51588 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:34:40.712371   51588 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:34:40.712379   51588 out.go:309] Setting ErrFile to fd 2...
	I1127 23:34:40.712384   51588 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:34:40.712544   51588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4554/.minikube/bin
	I1127 23:34:40.713153   51588 out.go:303] Setting JSON to false
	I1127 23:34:40.714362   51588 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1033,"bootTime":1701127048,"procs":635,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 23:34:40.714420   51588 start.go:138] virtualization: kvm guest
	I1127 23:34:40.716717   51588 out.go:177] * [ingress-addon-legacy-719415] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 23:34:40.718359   51588 out.go:177]   - MINIKUBE_LOCATION=17206
	I1127 23:34:40.718402   51588 notify.go:220] Checking for updates...
	I1127 23:34:40.719979   51588 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:34:40.721478   51588 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-4554/kubeconfig
	I1127 23:34:40.722928   51588 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4554/.minikube
	I1127 23:34:40.724448   51588 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 23:34:40.725979   51588 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 23:34:40.727622   51588 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:34:40.749254   51588 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 23:34:40.749340   51588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:34:40.800292   51588 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-11-27 23:34:40.792026702 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 23:34:40.800379   51588 docker.go:295] overlay module found
	I1127 23:34:40.802447   51588 out.go:177] * Using the docker driver based on user configuration
	I1127 23:34:40.804056   51588 start.go:298] selected driver: docker
	I1127 23:34:40.804067   51588 start.go:902] validating driver "docker" against <nil>
	I1127 23:34:40.804076   51588 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 23:34:40.804799   51588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:34:40.856016   51588 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-11-27 23:34:40.847474093 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 23:34:40.856203   51588 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1127 23:34:40.856440   51588 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1127 23:34:40.858342   51588 out.go:177] * Using Docker driver with root privileges
	I1127 23:34:40.859972   51588 cni.go:84] Creating CNI manager for ""
	I1127 23:34:40.859999   51588 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1127 23:34:40.860021   51588 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1127 23:34:40.860039   51588 start_flags.go:323] config:
	{Name:ingress-addon-legacy-719415 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-719415 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:34:40.861715   51588 out.go:177] * Starting control plane node ingress-addon-legacy-719415 in cluster ingress-addon-legacy-719415
	I1127 23:34:40.863229   51588 cache.go:121] Beginning downloading kic base image for docker with crio
	I1127 23:34:40.864739   51588 out.go:177] * Pulling base image ...
	I1127 23:34:40.866118   51588 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1127 23:34:40.866139   51588 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 23:34:40.881473   51588 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1127 23:34:40.881500   51588 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	I1127 23:34:40.897960   51588 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1127 23:34:40.897990   51588 cache.go:56] Caching tarball of preloaded images
	I1127 23:34:40.898168   51588 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1127 23:34:40.900013   51588 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1127 23:34:40.901408   51588 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1127 23:34:40.935507   51588 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17206-4554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1127 23:34:44.162083   51588 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1127 23:34:44.162177   51588 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17206-4554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1127 23:34:45.165778   51588 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1127 23:34:45.166207   51588 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/config.json ...
	I1127 23:34:45.166248   51588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/config.json: {Name:mk12503176a1a40c603ba17be7aed9b0f3fbb44d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:34:45.166448   51588 cache.go:194] Successfully downloaded all kic artifacts
	I1127 23:34:45.166483   51588 start.go:365] acquiring machines lock for ingress-addon-legacy-719415: {Name:mk17361dfe890f1c07da37b69939a40c4881c1a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:34:45.166547   51588 start.go:369] acquired machines lock for "ingress-addon-legacy-719415" in 48.17µs
	I1127 23:34:45.166570   51588 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-719415 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-719415 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1127 23:34:45.166717   51588 start.go:125] createHost starting for "" (driver="docker")
	I1127 23:34:45.169013   51588 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1127 23:34:45.169236   51588 start.go:159] libmachine.API.Create for "ingress-addon-legacy-719415" (driver="docker")
	I1127 23:34:45.169272   51588 client.go:168] LocalClient.Create starting
	I1127 23:34:45.169333   51588 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem
	I1127 23:34:45.169374   51588 main.go:141] libmachine: Decoding PEM data...
	I1127 23:34:45.169402   51588 main.go:141] libmachine: Parsing certificate...
	I1127 23:34:45.169463   51588 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-4554/.minikube/certs/cert.pem
	I1127 23:34:45.169491   51588 main.go:141] libmachine: Decoding PEM data...
	I1127 23:34:45.169512   51588 main.go:141] libmachine: Parsing certificate...
	I1127 23:34:45.169837   51588 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-719415 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1127 23:34:45.185368   51588 cli_runner.go:211] docker network inspect ingress-addon-legacy-719415 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1127 23:34:45.185444   51588 network_create.go:281] running [docker network inspect ingress-addon-legacy-719415] to gather additional debugging logs...
	I1127 23:34:45.185464   51588 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-719415
	W1127 23:34:45.200492   51588 cli_runner.go:211] docker network inspect ingress-addon-legacy-719415 returned with exit code 1
	I1127 23:34:45.200533   51588 network_create.go:284] error running [docker network inspect ingress-addon-legacy-719415]: docker network inspect ingress-addon-legacy-719415: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-719415 not found
	I1127 23:34:45.200550   51588 network_create.go:286] output of [docker network inspect ingress-addon-legacy-719415]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-719415 not found
	
	** /stderr **
	I1127 23:34:45.200640   51588 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 23:34:45.217163   51588 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00063f260}
	I1127 23:34:45.217206   51588 network_create.go:124] attempt to create docker network ingress-addon-legacy-719415 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1127 23:34:45.217248   51588 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-719415 ingress-addon-legacy-719415
	I1127 23:34:45.270650   51588 network_create.go:108] docker network ingress-addon-legacy-719415 192.168.49.0/24 created
	I1127 23:34:45.270686   51588 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-719415" container
	I1127 23:34:45.270738   51588 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1127 23:34:45.285290   51588 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-719415 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-719415 --label created_by.minikube.sigs.k8s.io=true
	I1127 23:34:45.301709   51588 oci.go:103] Successfully created a docker volume ingress-addon-legacy-719415
	I1127 23:34:45.301791   51588 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-719415-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-719415 --entrypoint /usr/bin/test -v ingress-addon-legacy-719415:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1127 23:34:47.047015   51588 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-719415-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-719415 --entrypoint /usr/bin/test -v ingress-addon-legacy-719415:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib: (1.745171912s)
	I1127 23:34:47.047043   51588 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-719415
	I1127 23:34:47.047057   51588 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1127 23:34:47.047075   51588 kic.go:194] Starting extracting preloaded images to volume ...
	I1127 23:34:47.047127   51588 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17206-4554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-719415:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1127 23:34:52.344330   51588 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17206-4554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-719415:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir: (5.297153075s)
	I1127 23:34:52.344363   51588 kic.go:203] duration metric: took 5.297286 seconds to extract preloaded images to volume
	W1127 23:34:52.344494   51588 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1127 23:34:52.344589   51588 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1127 23:34:52.394918   51588 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-719415 --name ingress-addon-legacy-719415 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-719415 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-719415 --network ingress-addon-legacy-719415 --ip 192.168.49.2 --volume ingress-addon-legacy-719415:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1127 23:34:52.699756   51588 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-719415 --format={{.State.Running}}
	I1127 23:34:52.716600   51588 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-719415 --format={{.State.Status}}
	I1127 23:34:52.733671   51588 cli_runner.go:164] Run: docker exec ingress-addon-legacy-719415 stat /var/lib/dpkg/alternatives/iptables
	I1127 23:34:52.812153   51588 oci.go:144] the created container "ingress-addon-legacy-719415" has a running status.
	I1127 23:34:52.812194   51588 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17206-4554/.minikube/machines/ingress-addon-legacy-719415/id_rsa...
	I1127 23:34:52.904784   51588 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/machines/ingress-addon-legacy-719415/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1127 23:34:52.904844   51588 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17206-4554/.minikube/machines/ingress-addon-legacy-719415/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1127 23:34:52.924623   51588 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-719415 --format={{.State.Status}}
	I1127 23:34:52.940733   51588 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1127 23:34:52.940751   51588 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-719415 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1127 23:34:53.006690   51588 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-719415 --format={{.State.Status}}
	I1127 23:34:53.026595   51588 machine.go:88] provisioning docker machine ...
	I1127 23:34:53.026636   51588 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-719415"
	I1127 23:34:53.026705   51588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-719415
	I1127 23:34:53.046163   51588 main.go:141] libmachine: Using SSH client type: native
	I1127 23:34:53.046511   51588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1127 23:34:53.046528   51588 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-719415 && echo "ingress-addon-legacy-719415" | sudo tee /etc/hostname
	I1127 23:34:53.047098   51588 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57390->127.0.0.1:32787: read: connection reset by peer
	I1127 23:34:56.180187   51588 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-719415
	
	I1127 23:34:56.180281   51588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-719415
	I1127 23:34:56.199334   51588 main.go:141] libmachine: Using SSH client type: native
	I1127 23:34:56.199784   51588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1127 23:34:56.199820   51588 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-719415' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-719415/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-719415' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1127 23:34:56.321801   51588 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1127 23:34:56.321829   51588 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4554/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4554/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4554/.minikube}
	I1127 23:34:56.321870   51588 ubuntu.go:177] setting up certificates
	I1127 23:34:56.321881   51588 provision.go:83] configureAuth start
	I1127 23:34:56.321930   51588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-719415
	I1127 23:34:56.337439   51588 provision.go:138] copyHostCerts
	I1127 23:34:56.337478   51588 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17206-4554/.minikube/ca.pem
	I1127 23:34:56.337505   51588 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4554/.minikube/ca.pem, removing ...
	I1127 23:34:56.337517   51588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4554/.minikube/ca.pem
	I1127 23:34:56.337596   51588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4554/.minikube/ca.pem (1078 bytes)
	I1127 23:34:56.337669   51588 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17206-4554/.minikube/cert.pem
	I1127 23:34:56.337690   51588 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4554/.minikube/cert.pem, removing ...
	I1127 23:34:56.337698   51588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4554/.minikube/cert.pem
	I1127 23:34:56.337726   51588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4554/.minikube/cert.pem (1123 bytes)
	I1127 23:34:56.337774   51588 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17206-4554/.minikube/key.pem
	I1127 23:34:56.337792   51588 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4554/.minikube/key.pem, removing ...
	I1127 23:34:56.337799   51588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4554/.minikube/key.pem
	I1127 23:34:56.337833   51588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4554/.minikube/key.pem (1679 bytes)
	I1127 23:34:56.337887   51588 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4554/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-719415 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-719415]
	I1127 23:34:56.396065   51588 provision.go:172] copyRemoteCerts
	I1127 23:34:56.396124   51588 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1127 23:34:56.396163   51588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-719415
	I1127 23:34:56.412028   51588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/ingress-addon-legacy-719415/id_rsa Username:docker}
	I1127 23:34:56.502016   51588 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1127 23:34:56.502096   51588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1127 23:34:56.523262   51588 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1127 23:34:56.523329   51588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1127 23:34:56.542866   51588 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1127 23:34:56.542924   51588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1127 23:34:56.563096   51588 provision.go:86] duration metric: configureAuth took 241.204345ms
	I1127 23:34:56.563125   51588 ubuntu.go:193] setting minikube options for container-runtime
	I1127 23:34:56.563290   51588 config.go:182] Loaded profile config "ingress-addon-legacy-719415": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1127 23:34:56.563392   51588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-719415
	I1127 23:34:56.579649   51588 main.go:141] libmachine: Using SSH client type: native
	I1127 23:34:56.580119   51588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1127 23:34:56.580146   51588 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1127 23:34:56.809616   51588 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1127 23:34:56.809638   51588 machine.go:91] provisioned docker machine in 3.783019641s
	I1127 23:34:56.809647   51588 client.go:171] LocalClient.Create took 11.64036478s
	I1127 23:34:56.809663   51588 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-719415" took 11.640427331s
	I1127 23:34:56.809669   51588 start.go:300] post-start starting for "ingress-addon-legacy-719415" (driver="docker")
	I1127 23:34:56.809678   51588 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1127 23:34:56.809728   51588 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1127 23:34:56.809770   51588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-719415
	I1127 23:34:56.825972   51588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/ingress-addon-legacy-719415/id_rsa Username:docker}
	I1127 23:34:56.914522   51588 ssh_runner.go:195] Run: cat /etc/os-release
	I1127 23:34:56.917525   51588 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1127 23:34:56.917559   51588 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1127 23:34:56.917567   51588 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1127 23:34:56.917573   51588 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1127 23:34:56.917582   51588 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4554/.minikube/addons for local assets ...
	I1127 23:34:56.917625   51588 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4554/.minikube/files for local assets ...
	I1127 23:34:56.917689   51588 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4554/.minikube/files/etc/ssl/certs/113062.pem -> 113062.pem in /etc/ssl/certs
	I1127 23:34:56.917700   51588 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/files/etc/ssl/certs/113062.pem -> /etc/ssl/certs/113062.pem
	I1127 23:34:56.917779   51588 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1127 23:34:56.925136   51588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/files/etc/ssl/certs/113062.pem --> /etc/ssl/certs/113062.pem (1708 bytes)
	I1127 23:34:56.945318   51588 start.go:303] post-start completed in 135.637167ms
	I1127 23:34:56.945697   51588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-719415
	I1127 23:34:56.961255   51588 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/config.json ...
	I1127 23:34:56.961505   51588 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 23:34:56.961556   51588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-719415
	I1127 23:34:56.977479   51588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/ingress-addon-legacy-719415/id_rsa Username:docker}
	I1127 23:34:57.062597   51588 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1127 23:34:57.066322   51588 start.go:128] duration metric: createHost completed in 11.89959285s
	I1127 23:34:57.066343   51588 start.go:83] releasing machines lock for "ingress-addon-legacy-719415", held for 11.899783243s
	I1127 23:34:57.066410   51588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-719415
	I1127 23:34:57.082041   51588 ssh_runner.go:195] Run: cat /version.json
	I1127 23:34:57.082109   51588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-719415
	I1127 23:34:57.082149   51588 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1127 23:34:57.082211   51588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-719415
	I1127 23:34:57.098276   51588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/ingress-addon-legacy-719415/id_rsa Username:docker}
	I1127 23:34:57.100428   51588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/ingress-addon-legacy-719415/id_rsa Username:docker}
	I1127 23:34:57.181426   51588 ssh_runner.go:195] Run: systemctl --version
	I1127 23:34:57.272781   51588 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1127 23:34:57.407545   51588 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1127 23:34:57.411601   51588 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 23:34:57.428815   51588 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1127 23:34:57.428887   51588 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 23:34:57.454635   51588 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
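The two find/mv passes above implement CNI disabling as a rename: every loopback, bridge, or podman config in /etc/cni/net.d gains a .mk_disabled suffix so CRI-O ignores it, leaving room for kindnet's config later. Roughly the same operation in Go (a sketch; disableCNIConfs is a hypothetical helper, and minikube itself shells out to find as logged):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfs renames matching configs in /etc/cni/net.d so the
// container runtime skips them.
func disableCNIConfs(patterns ...string) error {
	for _, pat := range patterns {
		matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
		if err != nil {
			return err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", m)
		}
	}
	return nil
}

func main() {
	// Same globs the logged find commands use.
	if err := disableCNIConfs("*loopback.conf*", "*bridge*", "*podman*"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}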
	I1127 23:34:57.454657   51588 start.go:472] detecting cgroup driver to use...
	I1127 23:34:57.454689   51588 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1127 23:34:57.454739   51588 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1127 23:34:57.468273   51588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1127 23:34:57.477526   51588 docker.go:203] disabling cri-docker service (if available) ...
	I1127 23:34:57.477571   51588 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1127 23:34:57.488575   51588 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1127 23:34:57.500613   51588 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1127 23:34:57.569331   51588 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1127 23:34:57.640657   51588 docker.go:219] disabling docker service ...
	I1127 23:34:57.640722   51588 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1127 23:34:57.657260   51588 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1127 23:34:57.667040   51588 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1127 23:34:57.737820   51588 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1127 23:34:57.824583   51588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1127 23:34:57.834968   51588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 23:34:57.848872   51588 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1127 23:34:57.848929   51588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:34:57.857278   51588 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1127 23:34:57.857353   51588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:34:57.865529   51588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:34:57.873744   51588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:34:57.882180   51588 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1127 23:34:57.889813   51588 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1127 23:34:57.896819   51588 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1127 23:34:57.904427   51588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 23:34:57.972378   51588 ssh_runner.go:195] Run: sudo systemctl restart crio
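The sed edits above pin three CRI-O settings before the crio restart: the pause image, cgroupfs as the cgroup manager, and conmon_cgroup = "pod". The same in-place rewrite, sketched in Go against the logged config path (pinCrioSettings is a hypothetical helper, not minikube's implementation):

package main

import (
	"os"
	"regexp"
)

// pinCrioSettings mirrors the logged sed edits on 02-crio.conf:
// pin the pause image, force cgroupfs, put conmon in the pod cgroup.
func pinCrioSettings(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Drop any existing conmon_cgroup line first, as the sed '/.../d' does.
	data = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAll(data, nil)
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	return os.WriteFile(path, data, 0o644)
}

func main() {
	if err := pinCrioSettings("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		panic(err)
	}
}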
	I1127 23:34:58.079775   51588 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1127 23:34:58.079845   51588 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1127 23:34:58.083038   51588 start.go:540] Will wait 60s for crictl version
	I1127 23:34:58.083091   51588 ssh_runner.go:195] Run: which crictl
	I1127 23:34:58.085840   51588 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1127 23:34:58.115874   51588 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1127 23:34:58.115945   51588 ssh_runner.go:195] Run: crio --version
	I1127 23:34:58.147491   51588 ssh_runner.go:195] Run: crio --version
	I1127 23:34:58.183600   51588 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1127 23:34:58.185234   51588 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-719415 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 23:34:58.201778   51588 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1127 23:34:58.205317   51588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 23:34:58.215726   51588 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1127 23:34:58.215784   51588 ssh_runner.go:195] Run: sudo crictl images --output json
	I1127 23:34:58.258224   51588 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1127 23:34:58.258279   51588 ssh_runner.go:195] Run: which lz4
	I1127 23:34:58.261388   51588 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1127 23:34:58.261501   51588 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1127 23:34:58.264458   51588 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1127 23:34:58.264483   51588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1127 23:34:59.198193   51588 crio.go:444] Took 0.936739 seconds to copy over tarball
	I1127 23:34:59.198253   51588 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1127 23:35:01.431741   51588 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.233456687s)
	I1127 23:35:01.431772   51588 crio.go:451] Took 2.233555 seconds to extract the tarball
	I1127 23:35:01.431784   51588 ssh_runner.go:146] rm: /preloaded.tar.lz4
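The preload flow above is check-then-transfer: stat the remote tarball, scp the ~495 MB archive only when it is absent, extract it with tar -I lz4 into /var, then delete it. A compact sketch of the remote-side steps (ensurePreload is a hypothetical helper; the real code runs these commands over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensurePreload extracts the preloaded image tarball into /var and
// removes it afterwards, mirroring the logged commands.
func ensurePreload(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		// In the real flow this is where the archive would be scp'd over.
		return fmt.Errorf("tarball not present, transfer it first: %w", err)
	}
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return err
	}
	return os.Remove(tarball) // reclaim the space once extracted
}

func main() {
	if err := ensurePreload("/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}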
	I1127 23:35:01.498741   51588 ssh_runner.go:195] Run: sudo crictl images --output json
	I1127 23:35:01.529602   51588 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1127 23:35:01.529625   51588 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1127 23:35:01.529681   51588 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1127 23:35:01.529706   51588 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1127 23:35:01.529718   51588 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1127 23:35:01.529739   51588 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1127 23:35:01.529760   51588 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1127 23:35:01.529679   51588 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:35:01.529726   51588 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1127 23:35:01.529817   51588 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1127 23:35:01.531153   51588 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1127 23:35:01.531157   51588 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1127 23:35:01.531179   51588 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1127 23:35:01.531188   51588 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1127 23:35:01.531153   51588 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1127 23:35:01.531158   51588 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:35:01.531240   51588 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1127 23:35:01.531158   51588 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1127 23:35:01.741098   51588 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:35:01.775498   51588 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1127 23:35:01.788131   51588 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1127 23:35:01.796676   51588 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1127 23:35:01.824521   51588 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1127 23:35:01.828136   51588 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1127 23:35:01.829398   51588 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1127 23:35:01.870295   51588 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1127 23:35:01.876603   51588 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1127 23:35:01.876657   51588 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1127 23:35:01.876675   51588 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1127 23:35:01.876697   51588 ssh_runner.go:195] Run: which crictl
	I1127 23:35:01.876719   51588 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1127 23:35:01.876745   51588 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1127 23:35:01.876815   51588 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1127 23:35:01.876852   51588 ssh_runner.go:195] Run: which crictl
	I1127 23:35:01.876765   51588 ssh_runner.go:195] Run: which crictl
	I1127 23:35:01.876719   51588 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1127 23:35:01.876911   51588 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1127 23:35:01.876953   51588 ssh_runner.go:195] Run: which crictl
	I1127 23:35:01.880694   51588 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1127 23:35:01.880829   51588 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1127 23:35:01.880870   51588 ssh_runner.go:195] Run: which crictl
	I1127 23:35:01.880787   51588 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1127 23:35:01.880947   51588 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1127 23:35:01.880975   51588 ssh_runner.go:195] Run: which crictl
	I1127 23:35:01.949493   51588 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1127 23:35:01.949545   51588 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1127 23:35:01.949571   51588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1127 23:35:01.949587   51588 ssh_runner.go:195] Run: which crictl
	I1127 23:35:01.949653   51588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1127 23:35:01.949712   51588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1127 23:35:01.949766   51588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1127 23:35:01.949805   51588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1127 23:35:01.949869   51588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1127 23:35:02.055874   51588 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1127 23:35:02.061860   51588 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1127 23:35:02.061908   51588 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1127 23:35:02.062013   51588 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1127 23:35:02.063306   51588 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1127 23:35:02.063374   51588 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1127 23:35:02.066657   51588 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1127 23:35:02.094493   51588 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1127 23:35:02.094543   51588 cache_images.go:92] LoadImages completed in 564.908876ms
	W1127 23:35:02.094594   51588 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20: no such file or directory
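The "Unable to load cached images" warning is non-fatal: each image was flagged "needs transfer" because its ID was absent from the runtime, but the local cache files do not exist either, so the images are simply pulled later during kubeadm's preflight (visible further down in this log). The needs-transfer check reduces to an ID comparison, roughly as below (needsTransfer is a hypothetical sketch):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// needsTransfer reports whether an image must be loaded into the
// runtime: true when podman cannot resolve it or its ID differs from
// the expected digest.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // image not present in the runtime at all
	}
	return !bytes.Equal(bytes.TrimSpace(out), []byte(wantID))
}

func main() {
	// Hash taken from the corresponding log line above.
	fmt.Println(needsTransfer("registry.k8s.io/pause:3.2",
		"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"))
}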
	I1127 23:35:02.094650   51588 ssh_runner.go:195] Run: crio config
	I1127 23:35:02.165606   51588 cni.go:84] Creating CNI manager for ""
	I1127 23:35:02.165627   51588 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1127 23:35:02.165644   51588 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1127 23:35:02.165669   51588 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-719415 NodeName:ingress-addon-legacy-719415 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1127 23:35:02.165880   51588 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-719415"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1127 23:35:02.165969   51588 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-719415 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-719415 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
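The kubelet drop-in shown above is rendered from the profile config and written out as 10-kubeadm.conf (486 bytes, per the scp line that follows). A minimal rendering sketch with text/template; several flags are elided and the field names are assumptions, not minikube's types:

package main

import (
	"os"
	"text/template"
)

// A trimmed version of the drop-in above.
const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values taken from the log; only the rendering is illustrative.
	err := t.Execute(os.Stdout, map[string]string{
		"Version": "v1.18.20",
		"Node":    "ingress-addon-legacy-719415",
		"IP":      "192.168.49.2",
	})
	if err != nil {
		panic(err)
	}
}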
	I1127 23:35:02.166026   51588 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1127 23:35:02.174099   51588 binaries.go:44] Found k8s binaries, skipping transfer
	I1127 23:35:02.174168   51588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1127 23:35:02.181492   51588 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1127 23:35:02.196620   51588 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1127 23:35:02.212382   51588 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1127 23:35:02.227455   51588 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1127 23:35:02.230421   51588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
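Both /etc/hosts rewrites in this run (host.minikube.internal earlier, control-plane.minikube.internal here) use the same idempotent pattern: filter out any existing line for the host, append a fresh ip<TAB>host entry, and copy the result back over /etc/hosts. In Go, roughly as follows (ensureHostsEntry is hypothetical; the real code runs the bash pipeline shown in the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for the host and appends a
// fresh "ip<TAB>host" entry, like the logged grep/echo/cp pipeline.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.49.2",
		"control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}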
	I1127 23:35:02.239748   51588 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415 for IP: 192.168.49.2
	I1127 23:35:02.239781   51588 certs.go:190] acquiring lock for shared ca certs: {Name:mkd1a5db8f506dfbef3cb84c722632fd59c37603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:35:02.239909   51588 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4554/.minikube/ca.key
	I1127 23:35:02.239964   51588 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4554/.minikube/proxy-client-ca.key
	I1127 23:35:02.240043   51588 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.key
	I1127 23:35:02.240069   51588 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt with IP's: []
	I1127 23:35:02.397561   51588 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt ...
	I1127 23:35:02.397598   51588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt: {Name:mkaded23bdb4aab0f6e44983b0588e5ce5b1f1a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:35:02.397756   51588 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.key ...
	I1127 23:35:02.397768   51588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.key: {Name:mk5cf6009b7ce96e566e36b55812a8540800af03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:35:02.397832   51588 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/apiserver.key.dd3b5fb2
	I1127 23:35:02.397853   51588 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1127 23:35:02.569543   51588 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/apiserver.crt.dd3b5fb2 ...
	I1127 23:35:02.569578   51588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/apiserver.crt.dd3b5fb2: {Name:mk1c07dcea96a5434b490c826962f6116c30d1b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:35:02.569740   51588 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/apiserver.key.dd3b5fb2 ...
	I1127 23:35:02.569753   51588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/apiserver.key.dd3b5fb2: {Name:mkdcaae11461670d0dc4f37228e0bbe888e779f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:35:02.569819   51588 certs.go:337] copying /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/apiserver.crt
	I1127 23:35:02.569904   51588 certs.go:341] copying /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/apiserver.key
	I1127 23:35:02.569970   51588 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/proxy-client.key
	I1127 23:35:02.569982   51588 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/proxy-client.crt with IP's: []
	I1127 23:35:02.699703   51588 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/proxy-client.crt ...
	I1127 23:35:02.699733   51588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/proxy-client.crt: {Name:mk8f5224124d29ea5c68b534f59809d6dba68773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:35:02.699893   51588 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/proxy-client.key ...
	I1127 23:35:02.699906   51588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/proxy-client.key: {Name:mk3c8322bf6079513fa8c9c03d8bf4967427eef1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:35:02.699970   51588 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1127 23:35:02.699991   51588 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1127 23:35:02.700001   51588 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1127 23:35:02.700017   51588 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1127 23:35:02.700030   51588 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1127 23:35:02.700040   51588 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1127 23:35:02.700051   51588 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1127 23:35:02.700064   51588 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1127 23:35:02.700120   51588 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/home/jenkins/minikube-integration/17206-4554/.minikube/certs/11306.pem (1338 bytes)
	W1127 23:35:02.700152   51588 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4554/.minikube/certs/home/jenkins/minikube-integration/17206-4554/.minikube/certs/11306_empty.pem, impossibly tiny 0 bytes
	I1127 23:35:02.700162   51588 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca-key.pem (1675 bytes)
	I1127 23:35:02.700195   51588 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem (1078 bytes)
	I1127 23:35:02.700217   51588 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/home/jenkins/minikube-integration/17206-4554/.minikube/certs/cert.pem (1123 bytes)
	I1127 23:35:02.700238   51588 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/home/jenkins/minikube-integration/17206-4554/.minikube/certs/key.pem (1679 bytes)
	I1127 23:35:02.700276   51588 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4554/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4554/.minikube/files/etc/ssl/certs/113062.pem (1708 bytes)
	I1127 23:35:02.700307   51588 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:35:02.700319   51588 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/11306.pem -> /usr/share/ca-certificates/11306.pem
	I1127 23:35:02.700330   51588 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/files/etc/ssl/certs/113062.pem -> /usr/share/ca-certificates/113062.pem
	I1127 23:35:02.700930   51588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1127 23:35:02.722993   51588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1127 23:35:02.743968   51588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1127 23:35:02.764457   51588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1127 23:35:02.785298   51588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1127 23:35:02.805621   51588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1127 23:35:02.825882   51588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1127 23:35:02.846673   51588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1127 23:35:02.866950   51588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1127 23:35:02.887624   51588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/certs/11306.pem --> /usr/share/ca-certificates/11306.pem (1338 bytes)
	I1127 23:35:02.908193   51588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/files/etc/ssl/certs/113062.pem --> /usr/share/ca-certificates/113062.pem (1708 bytes)
	I1127 23:35:02.928891   51588 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1127 23:35:02.943823   51588 ssh_runner.go:195] Run: openssl version
	I1127 23:35:02.948604   51588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1127 23:35:02.956835   51588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:35:02.959963   51588 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:25 /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:35:02.960015   51588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:35:02.966050   51588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1127 23:35:02.973941   51588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11306.pem && ln -fs /usr/share/ca-certificates/11306.pem /etc/ssl/certs/11306.pem"
	I1127 23:35:02.981838   51588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11306.pem
	I1127 23:35:02.984816   51588 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:31 /usr/share/ca-certificates/11306.pem
	I1127 23:35:02.984859   51588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11306.pem
	I1127 23:35:02.991197   51588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11306.pem /etc/ssl/certs/51391683.0"
	I1127 23:35:02.999280   51588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/113062.pem && ln -fs /usr/share/ca-certificates/113062.pem /etc/ssl/certs/113062.pem"
	I1127 23:35:03.007558   51588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113062.pem
	I1127 23:35:03.010614   51588 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:31 /usr/share/ca-certificates/113062.pem
	I1127 23:35:03.010652   51588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113062.pem
	I1127 23:35:03.016589   51588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/113062.pem /etc/ssl/certs/3ec20f2e.0"
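The ln -fs targets such as b5213941.0 and 3ec20f2e.0 are OpenSSL subject-hash names: OpenSSL locates CA certificates in /etc/ssl/certs by hashed filename, and openssl x509 -hash -noout prints the hash each symlink must carry, which is why every install step above is preceded by that invocation. A small sketch of computing the name (subjectHash is a hypothetical wrapper around the same openssl call the log runs):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns the OpenSSL subject hash of a PEM certificate,
// i.e. the basename (plus ".0") its /etc/ssl/certs symlink must use.
func subjectHash(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	// For minikubeCA.pem this prints b5213941, matching the log's symlink.
	fmt.Printf("ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/%s.0\n", h)
}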
	I1127 23:35:03.024522   51588 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1127 23:35:03.027489   51588 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 23:35:03.027549   51588 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-719415 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-719415 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:35:03.027621   51588 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1127 23:35:03.027677   51588 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1127 23:35:03.060479   51588 cri.go:89] found id: ""
	I1127 23:35:03.060540   51588 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1127 23:35:03.068247   51588 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1127 23:35:03.076703   51588 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1127 23:35:03.076752   51588 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1127 23:35:03.084121   51588 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1127 23:35:03.084175   51588 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1127 23:35:03.126550   51588 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1127 23:35:03.126614   51588 kubeadm.go:322] [preflight] Running pre-flight checks
	I1127 23:35:03.163476   51588 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1127 23:35:03.163680   51588 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1046-gcp
	I1127 23:35:03.163757   51588 kubeadm.go:322] OS: Linux
	I1127 23:35:03.163814   51588 kubeadm.go:322] CGROUPS_CPU: enabled
	I1127 23:35:03.163892   51588 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1127 23:35:03.163936   51588 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1127 23:35:03.163999   51588 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1127 23:35:03.164062   51588 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1127 23:35:03.164131   51588 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1127 23:35:03.230668   51588 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1127 23:35:03.230796   51588 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1127 23:35:03.230911   51588 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1127 23:35:03.406542   51588 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1127 23:35:03.407325   51588 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1127 23:35:03.407390   51588 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1127 23:35:03.481506   51588 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1127 23:35:03.484716   51588 out.go:204]   - Generating certificates and keys ...
	I1127 23:35:03.484862   51588 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1127 23:35:03.484986   51588 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1127 23:35:03.748196   51588 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1127 23:35:03.992422   51588 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1127 23:35:04.137696   51588 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1127 23:35:04.273656   51588 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1127 23:35:04.416076   51588 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1127 23:35:04.416268   51588 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-719415 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1127 23:35:04.539394   51588 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1127 23:35:04.539548   51588 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-719415 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1127 23:35:04.601968   51588 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1127 23:35:04.936144   51588 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1127 23:35:05.000623   51588 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1127 23:35:05.000715   51588 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1127 23:35:05.183793   51588 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1127 23:35:05.372667   51588 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1127 23:35:05.579677   51588 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1127 23:35:05.717058   51588 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1127 23:35:05.717643   51588 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1127 23:35:05.719474   51588 out.go:204]   - Booting up control plane ...
	I1127 23:35:05.719567   51588 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1127 23:35:05.722990   51588 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1127 23:35:05.724057   51588 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1127 23:35:05.726227   51588 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1127 23:35:05.729032   51588 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1127 23:35:12.231268   51588 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.502220 seconds
	I1127 23:35:12.231451   51588 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1127 23:35:12.241780   51588 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1127 23:35:12.757366   51588 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1127 23:35:12.757519   51588 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-719415 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1127 23:35:13.264974   51588 kubeadm.go:322] [bootstrap-token] Using token: 9z821d.t1eactbrkro1mlqz
	I1127 23:35:13.266484   51588 out.go:204]   - Configuring RBAC rules ...
	I1127 23:35:13.266606   51588 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1127 23:35:13.269538   51588 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1127 23:35:13.275465   51588 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1127 23:35:13.277303   51588 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1127 23:35:13.278986   51588 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1127 23:35:13.280916   51588 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1127 23:35:13.289394   51588 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1127 23:35:13.512082   51588 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1127 23:35:13.678963   51588 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1127 23:35:13.679971   51588 kubeadm.go:322] 
	I1127 23:35:13.680055   51588 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1127 23:35:13.680077   51588 kubeadm.go:322] 
	I1127 23:35:13.680189   51588 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1127 23:35:13.680222   51588 kubeadm.go:322] 
	I1127 23:35:13.680256   51588 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1127 23:35:13.680359   51588 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1127 23:35:13.680454   51588 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1127 23:35:13.680476   51588 kubeadm.go:322] 
	I1127 23:35:13.680526   51588 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1127 23:35:13.680631   51588 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1127 23:35:13.680702   51588 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1127 23:35:13.680709   51588 kubeadm.go:322] 
	I1127 23:35:13.680813   51588 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1127 23:35:13.680900   51588 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1127 23:35:13.680912   51588 kubeadm.go:322] 
	I1127 23:35:13.680983   51588 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 9z821d.t1eactbrkro1mlqz \
	I1127 23:35:13.681073   51588 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:4d50fd6fa1338d5979f67697fdf2bc9944f7b911d13890c8a839ee1a72bd8682 \
	I1127 23:35:13.681097   51588 kubeadm.go:322]     --control-plane 
	I1127 23:35:13.681101   51588 kubeadm.go:322] 
	I1127 23:35:13.681168   51588 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1127 23:35:13.681183   51588 kubeadm.go:322] 
	I1127 23:35:13.681318   51588 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 9z821d.t1eactbrkro1mlqz \
	I1127 23:35:13.681461   51588 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:4d50fd6fa1338d5979f67697fdf2bc9944f7b911d13890c8a839ee1a72bd8682 
	I1127 23:35:13.683164   51588 kubeadm.go:322] W1127 23:35:03.126084    1378 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1127 23:35:13.683401   51588 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1046-gcp\n", err: exit status 1
	I1127 23:35:13.683496   51588 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1127 23:35:13.683634   51588 kubeadm.go:322] W1127 23:35:05.722708    1378 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1127 23:35:13.683787   51588 kubeadm.go:322] W1127 23:35:05.723964    1378 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1127 23:35:13.683804   51588 cni.go:84] Creating CNI manager for ""
	I1127 23:35:13.683811   51588 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1127 23:35:13.685843   51588 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1127 23:35:13.687462   51588 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1127 23:35:13.691112   51588 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1127 23:35:13.691128   51588 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1127 23:35:13.706994   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1127 23:35:14.121848   51588 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1127 23:35:14.121995   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:14.121997   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=ingress-addon-legacy-719415 minikube.k8s.io/updated_at=2023_11_27T23_35_14_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:14.245849   51588 ops.go:34] apiserver oom_adj: -16
	I1127 23:35:14.245860   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:14.311124   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:14.877563   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:15.377548   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:15.877930   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:16.377389   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:16.877513   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:17.377117   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:17.877712   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:18.377705   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:18.877652   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:19.377947   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:19.877889   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:20.377538   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:20.876863   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:21.377642   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:21.877682   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:22.377753   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:22.877565   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:23.377049   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:23.876935   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:24.377075   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:24.876867   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:25.377020   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:25.877736   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:26.377612   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:26.877364   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:27.377722   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:27.877045   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:28.377072   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:28.877214   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:29.377486   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:29.876961   51588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:35:30.007765   51588 kubeadm.go:1081] duration metric: took 15.885840202s to wait for elevateKubeSystemPrivileges.
	I1127 23:35:30.007813   51588 kubeadm.go:406] StartCluster complete in 26.980262678s
	I1127 23:35:30.007880   51588 settings.go:142] acquiring lock: {Name:mk8cf64b397eda9c03dbd178fc3aefd4ce90283a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:35:30.007951   51588 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4554/kubeconfig
	I1127 23:35:30.008750   51588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/kubeconfig: {Name:mkeacc22f444b1cc5befda4f2c22a9fc66e858ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:35:30.008954   51588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1127 23:35:30.009071   51588 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1127 23:35:30.009170   51588 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-719415"
	I1127 23:35:30.009196   51588 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-719415"
	I1127 23:35:30.009243   51588 host.go:66] Checking if "ingress-addon-legacy-719415" exists ...
	I1127 23:35:30.009265   51588 config.go:182] Loaded profile config "ingress-addon-legacy-719415": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1127 23:35:30.009332   51588 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-719415"
	I1127 23:35:30.009359   51588 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-719415"
	I1127 23:35:30.009678   51588 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-719415 --format={{.State.Status}}
	I1127 23:35:30.009625   51588 kapi.go:59] client config for ingress-addon-legacy-719415: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:35:30.009731   51588 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-719415 --format={{.State.Status}}
	I1127 23:35:30.010365   51588 cert_rotation.go:137] Starting client certificate rotation controller
	I1127 23:35:30.030544   51588 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:35:30.029559   51588 kapi.go:59] client config for ingress-addon-legacy-719415: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:35:30.032251   51588 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 23:35:30.032269   51588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1127 23:35:30.032310   51588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-719415
	I1127 23:35:30.032431   51588 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-719415"
	I1127 23:35:30.032470   51588 host.go:66] Checking if "ingress-addon-legacy-719415" exists ...
	I1127 23:35:30.032841   51588 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-719415 --format={{.State.Status}}
	I1127 23:35:30.046898   51588 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-719415" context rescaled to 1 replicas
	I1127 23:35:30.046940   51588 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1127 23:35:30.048689   51588 out.go:177] * Verifying Kubernetes components...
	I1127 23:35:30.050644   51588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:35:30.052066   51588 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1127 23:35:30.052085   51588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1127 23:35:30.052146   51588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-719415
	I1127 23:35:30.052610   51588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/ingress-addon-legacy-719415/id_rsa Username:docker}
	I1127 23:35:30.075462   51588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/ingress-addon-legacy-719415/id_rsa Username:docker}
	I1127 23:35:30.174734   51588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1127 23:35:30.175457   51588 kapi.go:59] client config for ingress-addon-legacy-719415: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:35:30.175796   51588 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-719415" to be "Ready" ...
	I1127 23:35:30.252001   51588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 23:35:30.264234   51588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1127 23:35:30.643927   51588 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1127 23:35:30.770001   51588 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1127 23:35:30.771171   51588 addons.go:502] enable addons completed in 762.095933ms: enabled=[storage-provisioner default-storageclass]
	I1127 23:35:32.183926   51588 node_ready.go:58] node "ingress-addon-legacy-719415" has status "Ready":"False"
	I1127 23:35:34.183926   51588 node_ready.go:49] node "ingress-addon-legacy-719415" has status "Ready":"True"
	I1127 23:35:34.183952   51588 node_ready.go:38] duration metric: took 4.008132729s waiting for node "ingress-addon-legacy-719415" to be "Ready" ...
	I1127 23:35:34.183960   51588 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 23:35:34.190911   51588 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-6zbd5" in "kube-system" namespace to be "Ready" ...
	I1127 23:35:36.199142   51588 pod_ready.go:102] pod "coredns-66bff467f8-6zbd5" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-27 23:35:29 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1127 23:35:38.200623   51588 pod_ready.go:102] pod "coredns-66bff467f8-6zbd5" in "kube-system" namespace has status "Ready":"False"
	I1127 23:35:40.201251   51588 pod_ready.go:102] pod "coredns-66bff467f8-6zbd5" in "kube-system" namespace has status "Ready":"False"
	I1127 23:35:42.700473   51588 pod_ready.go:102] pod "coredns-66bff467f8-6zbd5" in "kube-system" namespace has status "Ready":"False"
	I1127 23:35:45.200655   51588 pod_ready.go:102] pod "coredns-66bff467f8-6zbd5" in "kube-system" namespace has status "Ready":"False"
	I1127 23:35:47.700570   51588 pod_ready.go:102] pod "coredns-66bff467f8-6zbd5" in "kube-system" namespace has status "Ready":"False"
	I1127 23:35:48.201396   51588 pod_ready.go:92] pod "coredns-66bff467f8-6zbd5" in "kube-system" namespace has status "Ready":"True"
	I1127 23:35:48.201424   51588 pod_ready.go:81] duration metric: took 14.010480882s waiting for pod "coredns-66bff467f8-6zbd5" in "kube-system" namespace to be "Ready" ...
	I1127 23:35:48.201433   51588 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-719415" in "kube-system" namespace to be "Ready" ...
	I1127 23:35:48.205651   51588 pod_ready.go:92] pod "etcd-ingress-addon-legacy-719415" in "kube-system" namespace has status "Ready":"True"
	I1127 23:35:48.205672   51588 pod_ready.go:81] duration metric: took 4.233287ms waiting for pod "etcd-ingress-addon-legacy-719415" in "kube-system" namespace to be "Ready" ...
	I1127 23:35:48.205684   51588 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-719415" in "kube-system" namespace to be "Ready" ...
	I1127 23:35:48.209905   51588 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-719415" in "kube-system" namespace has status "Ready":"True"
	I1127 23:35:48.209926   51588 pod_ready.go:81] duration metric: took 4.23595ms waiting for pod "kube-apiserver-ingress-addon-legacy-719415" in "kube-system" namespace to be "Ready" ...
	I1127 23:35:48.209934   51588 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-719415" in "kube-system" namespace to be "Ready" ...
	I1127 23:35:48.214046   51588 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-719415" in "kube-system" namespace has status "Ready":"True"
	I1127 23:35:48.214087   51588 pod_ready.go:81] duration metric: took 4.145604ms waiting for pod "kube-controller-manager-ingress-addon-legacy-719415" in "kube-system" namespace to be "Ready" ...
	I1127 23:35:48.214107   51588 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j2xtv" in "kube-system" namespace to be "Ready" ...
	I1127 23:35:48.218191   51588 pod_ready.go:92] pod "kube-proxy-j2xtv" in "kube-system" namespace has status "Ready":"True"
	I1127 23:35:48.218212   51588 pod_ready.go:81] duration metric: took 4.097856ms waiting for pod "kube-proxy-j2xtv" in "kube-system" namespace to be "Ready" ...
	I1127 23:35:48.218220   51588 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-719415" in "kube-system" namespace to be "Ready" ...
	I1127 23:35:48.396678   51588 request.go:629] Waited for 178.36951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-719415
	I1127 23:35:48.596900   51588 request.go:629] Waited for 197.393877ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-719415
	I1127 23:35:48.599521   51588 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-719415" in "kube-system" namespace has status "Ready":"True"
	I1127 23:35:48.599547   51588 pod_ready.go:81] duration metric: took 381.319419ms waiting for pod "kube-scheduler-ingress-addon-legacy-719415" in "kube-system" namespace to be "Ready" ...
	I1127 23:35:48.599561   51588 pod_ready.go:38] duration metric: took 14.415590678s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 23:35:48.599584   51588 api_server.go:52] waiting for apiserver process to appear ...
	I1127 23:35:48.599638   51588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 23:35:48.609956   51588 api_server.go:72] duration metric: took 18.562970406s to wait for apiserver process to appear ...
	I1127 23:35:48.609983   51588 api_server.go:88] waiting for apiserver healthz status ...
	I1127 23:35:48.610007   51588 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1127 23:35:48.614659   51588 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1127 23:35:48.615449   51588 api_server.go:141] control plane version: v1.18.20
	I1127 23:35:48.615471   51588 api_server.go:131] duration metric: took 5.482434ms to wait for apiserver health ...
	I1127 23:35:48.615479   51588 system_pods.go:43] waiting for kube-system pods to appear ...
	I1127 23:35:48.796916   51588 request.go:629] Waited for 181.364726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1127 23:35:48.802183   51588 system_pods.go:59] 8 kube-system pods found
	I1127 23:35:48.802217   51588 system_pods.go:61] "coredns-66bff467f8-6zbd5" [a42deaf9-cf04-4341-b932-329d2a835b1f] Running
	I1127 23:35:48.802224   51588 system_pods.go:61] "etcd-ingress-addon-legacy-719415" [1cb330fc-1e79-41f6-9dd1-e920b63c4433] Running
	I1127 23:35:48.802228   51588 system_pods.go:61] "kindnet-zwmgf" [01339eaf-1278-48a8-8b36-edc15c39e969] Running
	I1127 23:35:48.802232   51588 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-719415" [0d51480c-31d3-4933-94c1-825e0ad5cbd4] Running
	I1127 23:35:48.802236   51588 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-719415" [6a6c1f24-7e71-457f-bcd1-f3a1bb1f3ecb] Running
	I1127 23:35:48.802240   51588 system_pods.go:61] "kube-proxy-j2xtv" [545b4f50-ee02-4ff2-959a-5e3dc4790e93] Running
	I1127 23:35:48.802247   51588 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-719415" [350c21ff-6177-400a-b63f-f2372e91ae49] Running
	I1127 23:35:48.802254   51588 system_pods.go:61] "storage-provisioner" [3bf02225-3d2b-44d0-a864-e4154c83c6ac] Running
	I1127 23:35:48.802260   51588 system_pods.go:74] duration metric: took 186.776013ms to wait for pod list to return data ...
	I1127 23:35:48.802268   51588 default_sa.go:34] waiting for default service account to be created ...
	I1127 23:35:48.996712   51588 request.go:629] Waited for 194.372243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1127 23:35:48.999273   51588 default_sa.go:45] found service account: "default"
	I1127 23:35:48.999302   51588 default_sa.go:55] duration metric: took 197.026223ms for default service account to be created ...
	I1127 23:35:48.999313   51588 system_pods.go:116] waiting for k8s-apps to be running ...
	I1127 23:35:49.196549   51588 request.go:629] Waited for 197.164518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1127 23:35:49.202756   51588 system_pods.go:86] 8 kube-system pods found
	I1127 23:35:49.202784   51588 system_pods.go:89] "coredns-66bff467f8-6zbd5" [a42deaf9-cf04-4341-b932-329d2a835b1f] Running
	I1127 23:35:49.202789   51588 system_pods.go:89] "etcd-ingress-addon-legacy-719415" [1cb330fc-1e79-41f6-9dd1-e920b63c4433] Running
	I1127 23:35:49.202793   51588 system_pods.go:89] "kindnet-zwmgf" [01339eaf-1278-48a8-8b36-edc15c39e969] Running
	I1127 23:35:49.202798   51588 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-719415" [0d51480c-31d3-4933-94c1-825e0ad5cbd4] Running
	I1127 23:35:49.202803   51588 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-719415" [6a6c1f24-7e71-457f-bcd1-f3a1bb1f3ecb] Running
	I1127 23:35:49.202807   51588 system_pods.go:89] "kube-proxy-j2xtv" [545b4f50-ee02-4ff2-959a-5e3dc4790e93] Running
	I1127 23:35:49.202811   51588 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-719415" [350c21ff-6177-400a-b63f-f2372e91ae49] Running
	I1127 23:35:49.202815   51588 system_pods.go:89] "storage-provisioner" [3bf02225-3d2b-44d0-a864-e4154c83c6ac] Running
	I1127 23:35:49.202821   51588 system_pods.go:126] duration metric: took 203.501927ms to wait for k8s-apps to be running ...
	I1127 23:35:49.202827   51588 system_svc.go:44] waiting for kubelet service to be running ....
	I1127 23:35:49.202886   51588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:35:49.213572   51588 system_svc.go:56] duration metric: took 10.736786ms WaitForService to wait for kubelet.
	I1127 23:35:49.213612   51588 kubeadm.go:581] duration metric: took 19.166620179s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1127 23:35:49.213634   51588 node_conditions.go:102] verifying NodePressure condition ...
	I1127 23:35:49.395984   51588 request.go:629] Waited for 182.262507ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1127 23:35:49.398816   51588 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1127 23:35:49.398842   51588 node_conditions.go:123] node cpu capacity is 8
	I1127 23:35:49.398852   51588 node_conditions.go:105] duration metric: took 185.212749ms to run NodePressure ...
	I1127 23:35:49.398863   51588 start.go:228] waiting for startup goroutines ...
	I1127 23:35:49.398869   51588 start.go:233] waiting for cluster config update ...
	I1127 23:35:49.398877   51588 start.go:242] writing updated cluster config ...
	I1127 23:35:49.399132   51588 ssh_runner.go:195] Run: rm -f paused
	I1127 23:35:49.446015   51588 start.go:600] kubectl: 1.28.4, cluster: 1.18.20 (minor skew: 10)
	I1127 23:35:49.448318   51588 out.go:177] 
	W1127 23:35:49.449940   51588 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.18.20.
	I1127 23:35:49.451411   51588 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1127 23:35:49.452826   51588 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-719415" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Nov 27 23:38:39 ingress-addon-legacy-719415 crio[958]: time="2023-11-27 23:38:39.261370162Z" level=info msg="Started container" PID=4930 containerID=658ebb3e14fe8273a64b2a93d6c19d3a1c3aefc8fc886021a335d39da3fb94e7 description=default/hello-world-app-5f5d8b66bb-7577h/hello-world-app id=e9e7a2c8-5b96-4d04-ab61-6555cef5672d name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=2e4c43e842881271807fe8ba36aaaf1bdcad7327671e27964c2c2445cc1e230e
	Nov 27 23:38:47 ingress-addon-legacy-719415 crio[958]: time="2023-11-27 23:38:47.848331130Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=7214082b-c2d9-4060-8a2f-70a25d626c61 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 27 23:38:53 ingress-addon-legacy-719415 crio[958]: time="2023-11-27 23:38:53.852758674Z" level=info msg="Stopping pod sandbox: 6273e9cca0df97c640fb7bceaf5d3c9b28d124b21a473a0130e9d15748bc9b9d" id=7cd531c7-84e4-4c51-8d5c-8539f96e3b67 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 27 23:38:53 ingress-addon-legacy-719415 crio[958]: time="2023-11-27 23:38:53.853873284Z" level=info msg="Stopped pod sandbox: 6273e9cca0df97c640fb7bceaf5d3c9b28d124b21a473a0130e9d15748bc9b9d" id=7cd531c7-84e4-4c51-8d5c-8539f96e3b67 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 27 23:38:53 ingress-addon-legacy-719415 crio[958]: time="2023-11-27 23:38:53.861509146Z" level=info msg="Stopping pod sandbox: 6273e9cca0df97c640fb7bceaf5d3c9b28d124b21a473a0130e9d15748bc9b9d" id=e09bad19-a0ad-449c-8f18-5def348571b7 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 27 23:38:53 ingress-addon-legacy-719415 crio[958]: time="2023-11-27 23:38:53.861578279Z" level=info msg="Stopped pod sandbox (already stopped): 6273e9cca0df97c640fb7bceaf5d3c9b28d124b21a473a0130e9d15748bc9b9d" id=e09bad19-a0ad-449c-8f18-5def348571b7 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 27 23:38:54 ingress-addon-legacy-719415 crio[958]: time="2023-11-27 23:38:54.639275322Z" level=info msg="Stopping container: 57712375d4705f651867c76bd8eb5183c15715c5618bff9d8646f088fc2ce102 (timeout: 2s)" id=7931a2e2-f5cb-4f1b-8936-18878c99bc3b name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 27 23:38:54 ingress-addon-legacy-719415 crio[958]: time="2023-11-27 23:38:54.641101456Z" level=info msg="Stopping container: 57712375d4705f651867c76bd8eb5183c15715c5618bff9d8646f088fc2ce102 (timeout: 2s)" id=97d38aa4-8b0c-43ec-b5f0-729f18ebb19c name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 27 23:38:55 ingress-addon-legacy-719415 crio[958]: time="2023-11-27 23:38:55.848023756Z" level=info msg="Stopping pod sandbox: 6273e9cca0df97c640fb7bceaf5d3c9b28d124b21a473a0130e9d15748bc9b9d" id=32005a9d-b0a3-41ba-afbb-563efd930ff1 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 27 23:38:55 ingress-addon-legacy-719415 crio[958]: time="2023-11-27 23:38:55.848092307Z" level=info msg="Stopped pod sandbox (already stopped): 6273e9cca0df97c640fb7bceaf5d3c9b28d124b21a473a0130e9d15748bc9b9d" id=32005a9d-b0a3-41ba-afbb-563efd930ff1 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 27 23:38:56 ingress-addon-legacy-719415 crio[958]: time="2023-11-27 23:38:56.648759455Z" level=warning msg="Stopping container 57712375d4705f651867c76bd8eb5183c15715c5618bff9d8646f088fc2ce102 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=7931a2e2-f5cb-4f1b-8936-18878c99bc3b name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 27 23:38:56 ingress-addon-legacy-719415 conmon[3458]: conmon 57712375d4705f651867 <ninfo>: container 3470 exited with status 137
	Nov 27 23:38:56 ingress-addon-legacy-719415 crio[958]: time="2023-11-27 23:38:56.819018644Z" level=info msg="Stopped container 57712375d4705f651867c76bd8eb5183c15715c5618bff9d8646f088fc2ce102: ingress-nginx/ingress-nginx-controller-7fcf777cb7-kkcmb/controller" id=97d38aa4-8b0c-43ec-b5f0-729f18ebb19c name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 27 23:38:56 ingress-addon-legacy-719415 crio[958]: time="2023-11-27 23:38:56.819549342Z" level=info msg="Stopped container 57712375d4705f651867c76bd8eb5183c15715c5618bff9d8646f088fc2ce102: ingress-nginx/ingress-nginx-controller-7fcf777cb7-kkcmb/controller" id=7931a2e2-f5cb-4f1b-8936-18878c99bc3b name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 27 23:38:56 ingress-addon-legacy-719415 crio[958]: time="2023-11-27 23:38:56.819683724Z" level=info msg="Stopping pod sandbox: 18edb5cd87b4cff7319f1b93c304965c5302c4edabe7253b3f88c3000648506c" id=bce5506d-ac6e-446b-9d78-32296c481ccd name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 27 23:38:56 ingress-addon-legacy-719415 crio[958]: time="2023-11-27 23:38:56.819944681Z" level=info msg="Stopping pod sandbox: 18edb5cd87b4cff7319f1b93c304965c5302c4edabe7253b3f88c3000648506c" id=56fbd094-19da-4c4b-a67b-b12a0a97a251 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 27 23:38:56 ingress-addon-legacy-719415 crio[958]: time="2023-11-27 23:38:56.822580249Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-GPR3H7KHXXE4VMAU - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-YAEIRJOG7YGAKR3P - [0:0]\n-X KUBE-HP-GPR3H7KHXXE4VMAU\n-X KUBE-HP-YAEIRJOG7YGAKR3P\nCOMMIT\n"
	Nov 27 23:38:56 ingress-addon-legacy-719415 crio[958]: time="2023-11-27 23:38:56.823883434Z" level=info msg="Closing host port tcp:80"
	Nov 27 23:38:56 ingress-addon-legacy-719415 crio[958]: time="2023-11-27 23:38:56.823919819Z" level=info msg="Closing host port tcp:443"
	Nov 27 23:38:56 ingress-addon-legacy-719415 crio[958]: time="2023-11-27 23:38:56.824864746Z" level=info msg="Host port tcp:80 does not have an open socket"
	Nov 27 23:38:56 ingress-addon-legacy-719415 crio[958]: time="2023-11-27 23:38:56.824894350Z" level=info msg="Host port tcp:443 does not have an open socket"
	Nov 27 23:38:56 ingress-addon-legacy-719415 crio[958]: time="2023-11-27 23:38:56.825023409Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-kkcmb Namespace:ingress-nginx ID:18edb5cd87b4cff7319f1b93c304965c5302c4edabe7253b3f88c3000648506c UID:dca02579-910e-4dca-aa11-0d6739d69def NetNS:/var/run/netns/13905909-1f78-402b-99a0-6e828f1be1ba Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 27 23:38:56 ingress-addon-legacy-719415 crio[958]: time="2023-11-27 23:38:56.825136809Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-kkcmb from CNI network \"kindnet\" (type=ptp)"
	Nov 27 23:38:56 ingress-addon-legacy-719415 crio[958]: time="2023-11-27 23:38:56.863459737Z" level=info msg="Stopped pod sandbox: 18edb5cd87b4cff7319f1b93c304965c5302c4edabe7253b3f88c3000648506c" id=bce5506d-ac6e-446b-9d78-32296c481ccd name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 27 23:38:56 ingress-addon-legacy-719415 crio[958]: time="2023-11-27 23:38:56.863587826Z" level=info msg="Stopped pod sandbox (already stopped): 18edb5cd87b4cff7319f1b93c304965c5302c4edabe7253b3f88c3000648506c" id=56fbd094-19da-4c4b-a67b-b12a0a97a251 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	658ebb3e14fe8       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            23 seconds ago      Running             hello-world-app           0                   2e4c43e842881       hello-world-app-5f5d8b66bb-7577h
	82ab926d09f77       docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d                    2 minutes ago       Running             nginx                     0                   9fb6de92970c2       nginx
	57712375d4705       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   18edb5cd87b4c       ingress-nginx-controller-7fcf777cb7-kkcmb
	329e455a58743       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   a10bc2e3112e4       ingress-nginx-admission-patch-cn7vb
	f5c137f4325fe       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   88726d1f3a2f8       ingress-nginx-admission-create-k2f6l
	1508ee3b986cd       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   111bd37cc9a00       coredns-66bff467f8-6zbd5
	d3b4515e980c2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   6acb09527359c       storage-provisioner
	703c855b0a6a5       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   6497d0da8acd7       kindnet-zwmgf
	11e0b058c5ef1       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   cc85c9dbb2945       kube-proxy-j2xtv
	3ecb3493de5f1       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   8561d1ab2f5ec       kube-apiserver-ingress-addon-legacy-719415
	f7002f0cc89cf       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   5082b80ee910a       kube-scheduler-ingress-addon-legacy-719415
	0ebc0b4f04b16       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   3555c4a006e6f       etcd-ingress-addon-legacy-719415
	a4f98912a5d3f       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   d87766db630e2       kube-controller-manager-ingress-addon-legacy-719415
	
	* 
	* ==> coredns [1508ee3b986cdc03e7703d6e5b93fe14f10c379cc4ee9384a72636e97a60ae49] <==
	* [INFO] 10.244.0.5:47810 - 50425 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.007299183s
	[INFO] 10.244.0.5:47810 - 4360 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003234584s
	[INFO] 10.244.0.5:58049 - 29465 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003342047s
	[INFO] 10.244.0.5:34424 - 14283 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003593683s
	[INFO] 10.244.0.5:44886 - 17995 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003575898s
	[INFO] 10.244.0.5:49302 - 32487 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003523516s
	[INFO] 10.244.0.5:60973 - 21339 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003363544s
	[INFO] 10.244.0.5:39838 - 59742 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003713349s
	[INFO] 10.244.0.5:40778 - 20955 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003693603s
	[INFO] 10.244.0.5:39838 - 26518 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005699627s
	[INFO] 10.244.0.5:40778 - 43565 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005701779s
	[INFO] 10.244.0.5:47810 - 23381 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005874082s
	[INFO] 10.244.0.5:60973 - 44983 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005788326s
	[INFO] 10.244.0.5:58049 - 56476 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006021488s
	[INFO] 10.244.0.5:49302 - 34414 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005967724s
	[INFO] 10.244.0.5:39838 - 2877 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000087399s
	[INFO] 10.244.0.5:40778 - 26975 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00008757s
	[INFO] 10.244.0.5:60973 - 36414 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000058173s
	[INFO] 10.244.0.5:34424 - 41337 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00612827s
	[INFO] 10.244.0.5:47810 - 56371 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000190676s
	[INFO] 10.244.0.5:44886 - 61021 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006195782s
	[INFO] 10.244.0.5:58049 - 57929 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000058794s
	[INFO] 10.244.0.5:49302 - 13044 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00004745s
	[INFO] 10.244.0.5:44886 - 6921 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000055705s
	[INFO] 10.244.0.5:34424 - 28277 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000045428s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-719415
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-719415
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45
	                    minikube.k8s.io/name=ingress-addon-legacy-719415
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_27T23_35_14_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Nov 2023 23:35:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-719415
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Nov 2023 23:38:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Nov 2023 23:38:44 +0000   Mon, 27 Nov 2023 23:35:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Nov 2023 23:38:44 +0000   Mon, 27 Nov 2023 23:35:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Nov 2023 23:38:44 +0000   Mon, 27 Nov 2023 23:35:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Nov 2023 23:38:44 +0000   Mon, 27 Nov 2023 23:35:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-719415
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	System Info:
	  Machine ID:                 470bf9b85a3b4845a7d6d82597a1e4b4
	  System UUID:                142d3c31-ec2b-46b1-bce6-b98d2b4facc8
	  Boot ID:                    ccf6e8a7-9afe-448c-b481-9ad79744adaf
	  Kernel Version:             5.15.0-1046-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-7577h                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 coredns-66bff467f8-6zbd5                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m33s
	  kube-system                 etcd-ingress-addon-legacy-719415                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kindnet-zwmgf                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m33s
	  kube-system                 kube-apiserver-ingress-addon-legacy-719415             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-719415    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kube-proxy-j2xtv                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 kube-scheduler-ingress-addon-legacy-719415             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 3m49s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m49s  kubelet     Node ingress-addon-legacy-719415 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s  kubelet     Node ingress-addon-legacy-719415 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s  kubelet     Node ingress-addon-legacy-719415 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m32s  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m29s  kubelet     Node ingress-addon-legacy-719415 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.004918] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006765] FS-Cache: N-cookie d=000000005a04be6d{9p.inode} n=00000000c65f5879
	[  +0.008731] FS-Cache: N-key=[8] '8fa00f0200000000'
	[  +0.264322] FS-Cache: Duplicate cookie detected
	[  +0.004664] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006733] FS-Cache: O-cookie d=000000005a04be6d{9p.inode} n=00000000b6e3f4db
	[  +0.007353] FS-Cache: O-key=[8] '97a00f0200000000'
	[  +0.004961] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.007979] FS-Cache: N-cookie d=000000005a04be6d{9p.inode} n=00000000bcfa7cd3
	[  +0.008706] FS-Cache: N-key=[8] '97a00f0200000000'
	[  +4.383695] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov27 23:36] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 72 26 ac bd 04 fd 4a e6 e3 33 da f7 08 00
	[  +1.007864] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 72 26 ac bd 04 fd 4a e6 e3 33 da f7 08 00
	[  +2.015744] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 72 26 ac bd 04 fd 4a e6 e3 33 da f7 08 00
	[  +4.159578] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 72 26 ac bd 04 fd 4a e6 e3 33 da f7 08 00
	[  +8.191130] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 26 ac bd 04 fd 4a e6 e3 33 da f7 08 00
	[ +16.126345] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 72 26 ac bd 04 fd 4a e6 e3 33 da f7 08 00
	[Nov27 23:37] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 72 26 ac bd 04 fd 4a e6 e3 33 da f7 08 00
	
	* 
	* ==> etcd [0ebc0b4f04b1665191f62c164df62f302d140800a013309f2f2136c7007c4ac2] <==
	* raft2023/11/27 23:35:06 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-27 23:35:06.858135 W | auth: simple token is not cryptographically signed
	2023-11-27 23:35:06.861007 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-11-27 23:35:06.861128 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/11/27 23:35:06 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-27 23:35:06.861667 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-11-27 23:35:06.863829 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-27 23:35:06.863992 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-11-27 23:35:06.864053 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/11/27 23:35:07 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/11/27 23:35:07 INFO: aec36adc501070cc became candidate at term 2
	raft2023/11/27 23:35:07 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/11/27 23:35:07 INFO: aec36adc501070cc became leader at term 2
	raft2023/11/27 23:35:07 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-11-27 23:35:07.852765 I | embed: ready to serve client requests
	2023-11-27 23:35:07.852880 I | etcdserver: published {Name:ingress-addon-legacy-719415 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-11-27 23:35:07.852936 I | embed: ready to serve client requests
	2023-11-27 23:35:07.853041 I | etcdserver: setting up the initial cluster version to 3.4
	2023-11-27 23:35:07.853420 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-11-27 23:35:07.853529 I | etcdserver/api: enabled capabilities for version 3.4
	2023-11-27 23:35:07.855610 I | embed: serving client requests on 192.168.49.2:2379
	2023-11-27 23:35:07.855666 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-27 23:35:34.771109 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (125.020971ms) to execute
	2023-11-27 23:35:35.363856 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1107" took too long (164.03531ms) to execute
	2023-11-27 23:35:35.363990 W | etcdserver: request "header:<ID:8128025441315547103 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/k8s.io-minikube-hostpath.179b9f06a7f09904\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/k8s.io-minikube-hostpath.179b9f06a7f09904\" value_size:776 lease:8128025441315546641 >> failure:<>>" with result "size:16" took too long (101.132919ms) to execute
	
	* 
	* ==> kernel <==
	*  23:39:02 up 21 min,  0 users,  load average: 0.08, 0.60, 0.61
	Linux ingress-addon-legacy-719415 5.15.0-1046-gcp #54~20.04.1-Ubuntu SMP Wed Oct 25 08:22:15 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [703c855b0a6a5a7b00cb3550d564ad8566e8f565a0a70aaf80df10deabf427ba] <==
	* I1127 23:36:53.134805       1 main.go:227] handling current node
	I1127 23:37:03.138206       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:37:03.138233       1 main.go:227] handling current node
	I1127 23:37:13.149420       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:37:13.149445       1 main.go:227] handling current node
	I1127 23:37:23.154747       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:37:23.154772       1 main.go:227] handling current node
	I1127 23:37:33.166793       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:37:33.166818       1 main.go:227] handling current node
	I1127 23:37:43.170340       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:37:43.170365       1 main.go:227] handling current node
	I1127 23:37:53.181126       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:37:53.181152       1 main.go:227] handling current node
	I1127 23:38:03.193196       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:38:03.193226       1 main.go:227] handling current node
	I1127 23:38:13.197412       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:38:13.197463       1 main.go:227] handling current node
	I1127 23:38:23.200784       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:38:23.200813       1 main.go:227] handling current node
	I1127 23:38:33.205194       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:38:33.205220       1 main.go:227] handling current node
	I1127 23:38:43.217149       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:38:43.217174       1 main.go:227] handling current node
	I1127 23:38:53.229359       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:38:53.229383       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [3ecb3493de5f1a306c21e19be754bdc18344dd09aa76047a75ae1886191668c1] <==
	* E1127 23:35:10.537642       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1127 23:35:10.742234       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1127 23:35:10.742255       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1127 23:35:10.742440       1 cache.go:39] Caches are synced for autoregister controller
	I1127 23:35:10.742297       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1127 23:35:10.742320       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1127 23:35:11.535621       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1127 23:35:11.535647       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1127 23:35:11.540303       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1127 23:35:11.543271       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1127 23:35:11.543288       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1127 23:35:11.817796       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1127 23:35:11.844788       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1127 23:35:11.957809       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1127 23:35:11.958740       1 controller.go:609] quota admission added evaluator for: endpoints
	I1127 23:35:11.961781       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1127 23:35:12.910414       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1127 23:35:13.504033       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1127 23:35:13.669111       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1127 23:35:13.803238       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1127 23:35:29.312693       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1127 23:35:29.744209       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1127 23:35:50.104337       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1127 23:36:17.485368       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1127 23:38:53.860843       1 watch.go:251] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*http2.responseWriter)(0xc00b341ac0), encoder:(*versioning.codec)(0xc006de6320), buf:(*bytes.Buffer)(0xc00a26ca50)})
	
	* 
	* ==> kube-controller-manager [a4f98912a5d3f437a1d126dcae2c2f599219ab46404ee642b94130c19600dfb1] <==
	* I1127 23:35:29.756408       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"086d4683-1cde-418f-a3b8-c0502bccfd45", APIVersion:"apps/v1", ResourceVersion:"330", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-6zbd5
	I1127 23:35:29.842342       1 shared_informer.go:230] Caches are synced for service account 
	I1127 23:35:29.856138       1 shared_informer.go:230] Caches are synced for attach detach 
	I1127 23:35:29.866774       1 shared_informer.go:230] Caches are synced for PVC protection 
	I1127 23:35:29.942262       1 shared_informer.go:230] Caches are synced for stateful set 
	I1127 23:35:29.942849       1 shared_informer.go:230] Caches are synced for persistent volume 
	I1127 23:35:29.942997       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1127 23:35:29.943018       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1127 23:35:29.943130       1 shared_informer.go:230] Caches are synced for resource quota 
	I1127 23:35:29.943691       1 shared_informer.go:230] Caches are synced for expand 
	I1127 23:35:29.943496       1 shared_informer.go:230] Caches are synced for resource quota 
	I1127 23:35:30.050319       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"c7bffef6-6a48-4c21-b391-7ef42ad545b1", APIVersion:"apps/v1", ResourceVersion:"361", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1127 23:35:30.063815       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"086d4683-1cde-418f-a3b8-c0502bccfd45", APIVersion:"apps/v1", ResourceVersion:"362", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-9hz2b
	I1127 23:35:30.143974       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I1127 23:35:30.144110       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1127 23:35:34.292665       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1127 23:35:50.095774       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"a4033258-688f-43eb-a1fa-82d8416ac13a", APIVersion:"apps/v1", ResourceVersion:"468", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1127 23:35:50.145164       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"2d76f904-a808-4f45-8a45-baf41b17e3a5", APIVersion:"apps/v1", ResourceVersion:"469", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-kkcmb
	I1127 23:35:50.155201       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"db2115b2-f260-4254-b42b-839396732aed", APIVersion:"batch/v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-k2f6l
	I1127 23:35:50.170476       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"ec685e45-cebd-45ca-830f-7a06bf63bf51", APIVersion:"batch/v1", ResourceVersion:"480", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-cn7vb
	I1127 23:35:53.054857       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"db2115b2-f260-4254-b42b-839396732aed", APIVersion:"batch/v1", ResourceVersion:"483", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1127 23:35:54.057527       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"ec685e45-cebd-45ca-830f-7a06bf63bf51", APIVersion:"batch/v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1127 23:38:37.435081       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"eb6a7aba-1424-43c8-9029-02b1e4e1dfef", APIVersion:"apps/v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1127 23:38:37.444056       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"fd9a7bad-a581-4a5d-a572-1259bcd27ce6", APIVersion:"apps/v1", ResourceVersion:"709", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-7577h
	E1127 23:38:59.325354       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-zddrq" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [11e0b058c5ef10d3a27a508f37776725ee3f67dde0c676fe2cb9290332be4611] <==
	* W1127 23:35:30.459596       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1127 23:35:30.467458       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1127 23:35:30.467505       1 server_others.go:186] Using iptables Proxier.
	I1127 23:35:30.467752       1 server.go:583] Version: v1.18.20
	I1127 23:35:30.470388       1 config.go:133] Starting endpoints config controller
	I1127 23:35:30.470470       1 config.go:315] Starting service config controller
	I1127 23:35:30.470570       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1127 23:35:30.470494       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1127 23:35:30.475293       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1127 23:35:30.570874       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [f7002f0cc89cfc298c5e9b346fccdaa662af9dac31ba6a7d14747d8fa83ca2ef] <==
	* I1127 23:35:10.751937       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1127 23:35:10.751963       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1127 23:35:10.754303       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1127 23:35:10.754450       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1127 23:35:10.754459       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1127 23:35:10.754478       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1127 23:35:10.756819       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1127 23:35:10.756984       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1127 23:35:10.757011       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1127 23:35:10.757150       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1127 23:35:10.757180       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1127 23:35:10.757992       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1127 23:35:10.758435       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1127 23:35:10.758542       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1127 23:35:10.758600       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1127 23:35:10.759073       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1127 23:35:10.759173       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1127 23:35:10.759251       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1127 23:35:11.573910       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1127 23:35:11.644326       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1127 23:35:11.672370       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1127 23:35:11.686625       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1127 23:35:11.692750       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1127 23:35:14.155097       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1127 23:35:29.771476       1 factory.go:503] pod: kube-system/coredns-66bff467f8-6zbd5 is already present in the active queue
	
	* 
	* ==> kubelet <==
	* Nov 27 23:38:22 ingress-addon-legacy-719415 kubelet[1860]: E1127 23:38:22.848985    1860 pod_workers.go:191] Error syncing pod 611cc398-6d55-4506-8d2b-a9d149eb10dd ("kube-ingress-dns-minikube_kube-system(611cc398-6d55-4506-8d2b-a9d149eb10dd)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Nov 27 23:38:36 ingress-addon-legacy-719415 kubelet[1860]: E1127 23:38:36.848753    1860 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 27 23:38:36 ingress-addon-legacy-719415 kubelet[1860]: E1127 23:38:36.848808    1860 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 27 23:38:36 ingress-addon-legacy-719415 kubelet[1860]: E1127 23:38:36.848874    1860 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 27 23:38:36 ingress-addon-legacy-719415 kubelet[1860]: E1127 23:38:36.848911    1860 pod_workers.go:191] Error syncing pod 611cc398-6d55-4506-8d2b-a9d149eb10dd ("kube-ingress-dns-minikube_kube-system(611cc398-6d55-4506-8d2b-a9d149eb10dd)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Nov 27 23:38:37 ingress-addon-legacy-719415 kubelet[1860]: I1127 23:38:37.449133    1860 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Nov 27 23:38:37 ingress-addon-legacy-719415 kubelet[1860]: I1127 23:38:37.566905    1860 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-9trm5" (UniqueName: "kubernetes.io/secret/096e6e7e-06dd-4355-a86a-7f4587eda11a-default-token-9trm5") pod "hello-world-app-5f5d8b66bb-7577h" (UID: "096e6e7e-06dd-4355-a86a-7f4587eda11a")
	Nov 27 23:38:37 ingress-addon-legacy-719415 kubelet[1860]: W1127 23:38:37.794862    1860 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/ec1d92d113b71ebb603305ba9e43f5ac96daf7eacce4afee88ca4224a5833610/crio-2e4c43e842881271807fe8ba36aaaf1bdcad7327671e27964c2c2445cc1e230e WatchSource:0}: Error finding container 2e4c43e842881271807fe8ba36aaaf1bdcad7327671e27964c2c2445cc1e230e: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc001175ce0 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x750800) %!!(MISSING)s(func() error=0x750790)}
	Nov 27 23:38:47 ingress-addon-legacy-719415 kubelet[1860]: E1127 23:38:47.848695    1860 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 27 23:38:47 ingress-addon-legacy-719415 kubelet[1860]: E1127 23:38:47.848740    1860 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 27 23:38:47 ingress-addon-legacy-719415 kubelet[1860]: E1127 23:38:47.848801    1860 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 27 23:38:47 ingress-addon-legacy-719415 kubelet[1860]: E1127 23:38:47.848838    1860 pod_workers.go:191] Error syncing pod 611cc398-6d55-4506-8d2b-a9d149eb10dd ("kube-ingress-dns-minikube_kube-system(611cc398-6d55-4506-8d2b-a9d149eb10dd)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Nov 27 23:38:53 ingress-addon-legacy-719415 kubelet[1860]: I1127 23:38:53.204377    1860 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-pjcmv" (UniqueName: "kubernetes.io/secret/611cc398-6d55-4506-8d2b-a9d149eb10dd-minikube-ingress-dns-token-pjcmv") pod "611cc398-6d55-4506-8d2b-a9d149eb10dd" (UID: "611cc398-6d55-4506-8d2b-a9d149eb10dd")
	Nov 27 23:38:53 ingress-addon-legacy-719415 kubelet[1860]: I1127 23:38:53.206280    1860 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/611cc398-6d55-4506-8d2b-a9d149eb10dd-minikube-ingress-dns-token-pjcmv" (OuterVolumeSpecName: "minikube-ingress-dns-token-pjcmv") pod "611cc398-6d55-4506-8d2b-a9d149eb10dd" (UID: "611cc398-6d55-4506-8d2b-a9d149eb10dd"). InnerVolumeSpecName "minikube-ingress-dns-token-pjcmv". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 27 23:38:53 ingress-addon-legacy-719415 kubelet[1860]: I1127 23:38:53.304699    1860 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-pjcmv" (UniqueName: "kubernetes.io/secret/611cc398-6d55-4506-8d2b-a9d149eb10dd-minikube-ingress-dns-token-pjcmv") on node "ingress-addon-legacy-719415" DevicePath ""
	Nov 27 23:38:54 ingress-addon-legacy-719415 kubelet[1860]: W1127 23:38:54.344748    1860 pod_container_deletor.go:77] Container "6273e9cca0df97c640fb7bceaf5d3c9b28d124b21a473a0130e9d15748bc9b9d" not found in pod's containers
	Nov 27 23:38:54 ingress-addon-legacy-719415 kubelet[1860]: E1127 23:38:54.640500    1860 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-kkcmb.179b9f35177beef1", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-kkcmb", UID:"dca02579-910e-4dca-aa11-0d6739d69def", APIVersion:"v1", ResourceVersion:"474", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-719415"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1516803a60fe2f1, ext:221166542383, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1516803a60fe2f1, ext:221166542383, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-kkcmb.179b9f35177beef1" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 27 23:38:54 ingress-addon-legacy-719415 kubelet[1860]: E1127 23:38:54.643550    1860 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-kkcmb.179b9f35177beef1", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-kkcmb", UID:"dca02579-910e-4dca-aa11-0d6739d69def", APIVersion:"v1", ResourceVersion:"474", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-719415"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1516803a60fe2f1, ext:221166542383, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1516803a6324219, ext:221168794973, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-kkcmb.179b9f35177beef1" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 27 23:38:57 ingress-addon-legacy-719415 kubelet[1860]: I1127 23:38:57.248920    1860 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-hb8tg" (UniqueName: "kubernetes.io/secret/dca02579-910e-4dca-aa11-0d6739d69def-ingress-nginx-token-hb8tg") pod "dca02579-910e-4dca-aa11-0d6739d69def" (UID: "dca02579-910e-4dca-aa11-0d6739d69def")
	Nov 27 23:38:57 ingress-addon-legacy-719415 kubelet[1860]: I1127 23:38:57.248966    1860 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/dca02579-910e-4dca-aa11-0d6739d69def-webhook-cert") pod "dca02579-910e-4dca-aa11-0d6739d69def" (UID: "dca02579-910e-4dca-aa11-0d6739d69def")
	Nov 27 23:38:57 ingress-addon-legacy-719415 kubelet[1860]: I1127 23:38:57.250790    1860 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dca02579-910e-4dca-aa11-0d6739d69def-ingress-nginx-token-hb8tg" (OuterVolumeSpecName: "ingress-nginx-token-hb8tg") pod "dca02579-910e-4dca-aa11-0d6739d69def" (UID: "dca02579-910e-4dca-aa11-0d6739d69def"). InnerVolumeSpecName "ingress-nginx-token-hb8tg". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 27 23:38:57 ingress-addon-legacy-719415 kubelet[1860]: I1127 23:38:57.251032    1860 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dca02579-910e-4dca-aa11-0d6739d69def-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "dca02579-910e-4dca-aa11-0d6739d69def" (UID: "dca02579-910e-4dca-aa11-0d6739d69def"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 27 23:38:57 ingress-addon-legacy-719415 kubelet[1860]: I1127 23:38:57.349171    1860 reconciler.go:319] Volume detached for volume "ingress-nginx-token-hb8tg" (UniqueName: "kubernetes.io/secret/dca02579-910e-4dca-aa11-0d6739d69def-ingress-nginx-token-hb8tg") on node "ingress-addon-legacy-719415" DevicePath ""
	Nov 27 23:38:57 ingress-addon-legacy-719415 kubelet[1860]: I1127 23:38:57.349196    1860 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/dca02579-910e-4dca-aa11-0d6739d69def-webhook-cert") on node "ingress-addon-legacy-719415" DevicePath ""
	Nov 27 23:38:57 ingress-addon-legacy-719415 kubelet[1860]: W1127 23:38:57.350422    1860 pod_container_deletor.go:77] Container "18edb5cd87b4cff7319f1b93c304965c5302c4edabe7253b3f88c3000648506c" not found in pod's containers
	
	* 
	* ==> storage-provisioner [d3b4515e980c20ee08a7227d9666ac2cf9cbebce603fdcb726cfc6444b988d63] <==
	* I1127 23:35:35.060398       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1127 23:35:35.067933       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1127 23:35:35.068003       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1127 23:35:35.198701       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1127 23:35:35.198850       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9ad9b4c3-60c5-42dc-a78c-cc8ef6e7271a", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-719415_140fbc73-6e00-4506-8eac-7736a53b3fa2 became leader
	I1127 23:35:35.198906       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-719415_140fbc73-6e00-4506-8eac-7736a53b3fa2!
	I1127 23:35:35.299801       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-719415_140fbc73-6e00-4506-8eac-7736a53b3fa2!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-719415 -n ingress-addon-legacy-719415
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-719415 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (182.25s)
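
The kubelet entries in the post-mortem above also show why the kube-ingress-dns-minikube pod never started: CRI-O rejects the short image name "cryptexlabs/minikube-ingress-dns:0.3.0" because the node defines no unqualified-search registries. A minimal sketch of the relevant /etc/containers/registries.conf entry (assuming the stock containers-common format that CRI-O reads); referencing the fully qualified name docker.io/cryptexlabs/minikube-ingress-dns:0.3.0 in the addon manifest would avoid the error without touching node configuration:

	# /etc/containers/registries.conf (sketch): let short names resolve against Docker Hub
	unqualified-search-registries = ["docker.io"]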

TestMultiNode/serial/PingHostFrom2Pods (3.13s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-595051 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-595051 -- exec busybox-5bc68d56bd-8pbpd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-595051 -- exec busybox-5bc68d56bd-8pbpd -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-595051 -- exec busybox-5bc68d56bd-8pbpd -- sh -c "ping -c 1 192.168.58.1": exit status 1 (182.549219ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-8pbpd): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-595051 -- exec busybox-5bc68d56bd-zp72z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-595051 -- exec busybox-5bc68d56bd-zp72z -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-595051 -- exec busybox-5bc68d56bd-zp72z -- sh -c "ping -c 1 192.168.58.1": exit status 1 (185.56543ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-zp72z): exit status 1
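
Both exec attempts fail with "ping: permission denied (are you root?)". Under CRI-O the container's default capability set, unlike Docker's, does not include CAP_NET_RAW, so ping cannot open a raw ICMP socket; the unprivileged ICMP fallback only works for groups inside net.ipv4.ping_group_range, which defaults to "1 0" (no group) in the pod's network namespace. Two plausible fixes: grant NET_RAW through the pod's securityContext, or widen the sysctl. A minimal sketch of the sysctl route (the pod name is hypothetical; net.ipv4.ping_group_range is in kubelet's safe-sysctl set, and recent busybox builds can use unprivileged ICMP datagram sockets):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: ping-probe                # hypothetical name
	spec:
	  securityContext:
	    sysctls:
	    - name: net.ipv4.ping_group_range
	      value: "0 2147483647"       # let any GID open unprivileged ICMP sockets
	  containers:
	  - name: busybox
	    image: busybox
	    command: ["sleep", "3600"]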
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-595051
helpers_test.go:235: (dbg) docker inspect multinode-595051:

-- stdout --
	[
	    {
	        "Id": "c6c4601dedfe3c650ee48be59f93374b4667adfe091881024e85eb053a15593b",
	        "Created": "2023-11-27T23:44:09.216721803Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 98170,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-27T23:44:09.49257017Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7b13b8068c138827ed6edd3fefc1858e39f15798035b600ada929f3fdbe10859",
	        "ResolvConfPath": "/var/lib/docker/containers/c6c4601dedfe3c650ee48be59f93374b4667adfe091881024e85eb053a15593b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c6c4601dedfe3c650ee48be59f93374b4667adfe091881024e85eb053a15593b/hostname",
	        "HostsPath": "/var/lib/docker/containers/c6c4601dedfe3c650ee48be59f93374b4667adfe091881024e85eb053a15593b/hosts",
	        "LogPath": "/var/lib/docker/containers/c6c4601dedfe3c650ee48be59f93374b4667adfe091881024e85eb053a15593b/c6c4601dedfe3c650ee48be59f93374b4667adfe091881024e85eb053a15593b-json.log",
	        "Name": "/multinode-595051",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-595051:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-595051",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5154fab00ee53ed60b112398e175a7d8759476751ebf589483db691f29913831-init/diff:/var/lib/docker/overlay2/7130e71395072cd8dcd718fa28933a7b57b5714a10c6614947d04756418543ae/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5154fab00ee53ed60b112398e175a7d8759476751ebf589483db691f29913831/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5154fab00ee53ed60b112398e175a7d8759476751ebf589483db691f29913831/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5154fab00ee53ed60b112398e175a7d8759476751ebf589483db691f29913831/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-595051",
	                "Source": "/var/lib/docker/volumes/multinode-595051/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-595051",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-595051",
	                "name.minikube.sigs.k8s.io": "multinode-595051",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8bcdc9fdac36f296fdc22f0c507325642728d40a0216e45070a3c13032389e40",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32844"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8bcdc9fdac36",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-595051": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c6c4601dedfe",
	                        "multinode-595051"
	                    ],
	                    "NetworkID": "7b52a55090c8a82cc640245bd7cbbbb4186b38ab95acfa02faf1bcaae7283a9d",
	                    "EndpointID": "d947d5fb8a04bfbfe705470ca86c6c2e2e26b3b014fb2fa2ade7a9a99e636012",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-595051 -n multinode-595051
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-595051 logs -n 25: (1.185166946s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-272109                           | mount-start-2-272109 | jenkins | v1.32.0 | 27 Nov 23 23:43 UTC | 27 Nov 23 23:43 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-272109 ssh -- ls                    | mount-start-2-272109 | jenkins | v1.32.0 | 27 Nov 23 23:43 UTC | 27 Nov 23 23:43 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-259749                           | mount-start-1-259749 | jenkins | v1.32.0 | 27 Nov 23 23:43 UTC | 27 Nov 23 23:43 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-272109 ssh -- ls                    | mount-start-2-272109 | jenkins | v1.32.0 | 27 Nov 23 23:43 UTC | 27 Nov 23 23:43 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-272109                           | mount-start-2-272109 | jenkins | v1.32.0 | 27 Nov 23 23:43 UTC | 27 Nov 23 23:43 UTC |
	| start   | -p mount-start-2-272109                           | mount-start-2-272109 | jenkins | v1.32.0 | 27 Nov 23 23:43 UTC | 27 Nov 23 23:43 UTC |
	| ssh     | mount-start-2-272109 ssh -- ls                    | mount-start-2-272109 | jenkins | v1.32.0 | 27 Nov 23 23:44 UTC | 27 Nov 23 23:44 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-272109                           | mount-start-2-272109 | jenkins | v1.32.0 | 27 Nov 23 23:44 UTC | 27 Nov 23 23:44 UTC |
	| delete  | -p mount-start-1-259749                           | mount-start-1-259749 | jenkins | v1.32.0 | 27 Nov 23 23:44 UTC | 27 Nov 23 23:44 UTC |
	| start   | -p multinode-595051                               | multinode-595051     | jenkins | v1.32.0 | 27 Nov 23 23:44 UTC | 27 Nov 23 23:45 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-595051 -- apply -f                   | multinode-595051     | jenkins | v1.32.0 | 27 Nov 23 23:45 UTC | 27 Nov 23 23:45 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-595051 -- rollout                    | multinode-595051     | jenkins | v1.32.0 | 27 Nov 23 23:45 UTC | 27 Nov 23 23:45 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-595051 -- get pods -o                | multinode-595051     | jenkins | v1.32.0 | 27 Nov 23 23:45 UTC | 27 Nov 23 23:45 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-595051 -- get pods -o                | multinode-595051     | jenkins | v1.32.0 | 27 Nov 23 23:45 UTC | 27 Nov 23 23:45 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-595051 -- exec                       | multinode-595051     | jenkins | v1.32.0 | 27 Nov 23 23:45 UTC | 27 Nov 23 23:45 UTC |
	|         | busybox-5bc68d56bd-8pbpd --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-595051 -- exec                       | multinode-595051     | jenkins | v1.32.0 | 27 Nov 23 23:45 UTC | 27 Nov 23 23:45 UTC |
	|         | busybox-5bc68d56bd-zp72z --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-595051 -- exec                       | multinode-595051     | jenkins | v1.32.0 | 27 Nov 23 23:45 UTC | 27 Nov 23 23:45 UTC |
	|         | busybox-5bc68d56bd-8pbpd --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-595051 -- exec                       | multinode-595051     | jenkins | v1.32.0 | 27 Nov 23 23:45 UTC | 27 Nov 23 23:45 UTC |
	|         | busybox-5bc68d56bd-zp72z --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-595051 -- exec                       | multinode-595051     | jenkins | v1.32.0 | 27 Nov 23 23:45 UTC | 27 Nov 23 23:45 UTC |
	|         | busybox-5bc68d56bd-8pbpd -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-595051 -- exec                       | multinode-595051     | jenkins | v1.32.0 | 27 Nov 23 23:45 UTC | 27 Nov 23 23:45 UTC |
	|         | busybox-5bc68d56bd-zp72z -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-595051 -- get pods -o                | multinode-595051     | jenkins | v1.32.0 | 27 Nov 23 23:45 UTC | 27 Nov 23 23:45 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-595051 -- exec                       | multinode-595051     | jenkins | v1.32.0 | 27 Nov 23 23:45 UTC | 27 Nov 23 23:45 UTC |
	|         | busybox-5bc68d56bd-8pbpd                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-595051 -- exec                       | multinode-595051     | jenkins | v1.32.0 | 27 Nov 23 23:45 UTC |                     |
	|         | busybox-5bc68d56bd-8pbpd -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-595051 -- exec                       | multinode-595051     | jenkins | v1.32.0 | 27 Nov 23 23:45 UTC | 27 Nov 23 23:45 UTC |
	|         | busybox-5bc68d56bd-zp72z                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-595051 -- exec                       | multinode-595051     | jenkins | v1.32.0 | 27 Nov 23 23:45 UTC |                     |
	|         | busybox-5bc68d56bd-zp72z -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
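Note: the two "ping -c 1 192.168.58.1" rows above are the only entries in this excerpt with an empty completion-time column, i.e. neither busybox pod ever got a reply from the host gateway. A minimal reproduction sketch using plain kubectl against the same context (the pod name is specific to this run and will differ on a fresh cluster):

	kubectl --context multinode-595051 exec busybox-5bc68d56bd-8pbpd -- sh -c "ping -c 1 192.168.58.1"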
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 23:44:02
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 23:44:02.988149   97564 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:44:02.988300   97564 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:44:02.988310   97564 out.go:309] Setting ErrFile to fd 2...
	I1127 23:44:02.988314   97564 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:44:02.988553   97564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4554/.minikube/bin
	I1127 23:44:02.989171   97564 out.go:303] Setting JSON to false
	I1127 23:44:02.990532   97564 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1595,"bootTime":1701127048,"procs":708,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 23:44:02.990596   97564 start.go:138] virtualization: kvm guest
	I1127 23:44:02.993075   97564 out.go:177] * [multinode-595051] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 23:44:02.994502   97564 out.go:177]   - MINIKUBE_LOCATION=17206
	I1127 23:44:02.994475   97564 notify.go:220] Checking for updates...
	I1127 23:44:02.996027   97564 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:44:02.997411   97564 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-4554/kubeconfig
	I1127 23:44:02.998959   97564 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4554/.minikube
	I1127 23:44:03.000327   97564 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 23:44:03.001644   97564 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 23:44:03.003320   97564 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:44:03.029987   97564 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 23:44:03.030122   97564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:44:03.085159   97564 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:36 SystemTime:2023-11-27 23:44:03.07550643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 23:44:03.085382   97564 docker.go:295] overlay module found
	I1127 23:44:03.087675   97564 out.go:177] * Using the docker driver based on user configuration
	I1127 23:44:03.089477   97564 start.go:298] selected driver: docker
	I1127 23:44:03.089492   97564 start.go:902] validating driver "docker" against <nil>
	I1127 23:44:03.089504   97564 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 23:44:03.090256   97564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:44:03.142473   97564 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:36 SystemTime:2023-11-27 23:44:03.13431778 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 23:44:03.142633   97564 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1127 23:44:03.142873   97564 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1127 23:44:03.144739   97564 out.go:177] * Using Docker driver with root privileges
	I1127 23:44:03.146251   97564 cni.go:84] Creating CNI manager for ""
	I1127 23:44:03.146272   97564 cni.go:136] 0 nodes found, recommending kindnet
	I1127 23:44:03.146282   97564 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1127 23:44:03.146292   97564 start_flags.go:323] config:
	{Name:multinode-595051 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-595051 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:44:03.147819   97564 out.go:177] * Starting control plane node multinode-595051 in cluster multinode-595051
	I1127 23:44:03.149173   97564 cache.go:121] Beginning downloading kic base image for docker with crio
	I1127 23:44:03.150620   97564 out.go:177] * Pulling base image ...
	I1127 23:44:03.151838   97564 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:44:03.151883   97564 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 23:44:03.151886   97564 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17206-4554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1127 23:44:03.151994   97564 cache.go:56] Caching tarball of preloaded images
	I1127 23:44:03.152095   97564 preload.go:174] Found /home/jenkins/minikube-integration/17206-4554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1127 23:44:03.152115   97564 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1127 23:44:03.152432   97564 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/config.json ...
	I1127 23:44:03.152455   97564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/config.json: {Name:mk0937a31039c337cce7c6a34370fddbee5aa75e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:44:03.168360   97564 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1127 23:44:03.168383   97564 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	I1127 23:44:03.168396   97564 cache.go:194] Successfully downloaded all kic artifacts
	I1127 23:44:03.168424   97564 start.go:365] acquiring machines lock for multinode-595051: {Name:mk4d50dea28e9648857e6619927d5502dfa2d398 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:44:03.168510   97564 start.go:369] acquired machines lock for "multinode-595051" in 71.087µs
	I1127 23:44:03.168534   97564 start.go:93] Provisioning new machine with config: &{Name:multinode-595051 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-595051 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1127 23:44:03.168609   97564 start.go:125] createHost starting for "" (driver="docker")
	I1127 23:44:03.170659   97564 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1127 23:44:03.170882   97564 start.go:159] libmachine.API.Create for "multinode-595051" (driver="docker")
	I1127 23:44:03.170912   97564 client.go:168] LocalClient.Create starting
	I1127 23:44:03.170997   97564 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem
	I1127 23:44:03.171031   97564 main.go:141] libmachine: Decoding PEM data...
	I1127 23:44:03.171046   97564 main.go:141] libmachine: Parsing certificate...
	I1127 23:44:03.171102   97564 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-4554/.minikube/certs/cert.pem
	I1127 23:44:03.171120   97564 main.go:141] libmachine: Decoding PEM data...
	I1127 23:44:03.171128   97564 main.go:141] libmachine: Parsing certificate...
	I1127 23:44:03.171440   97564 cli_runner.go:164] Run: docker network inspect multinode-595051 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1127 23:44:03.187403   97564 cli_runner.go:211] docker network inspect multinode-595051 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1127 23:44:03.187494   97564 network_create.go:281] running [docker network inspect multinode-595051] to gather additional debugging logs...
	I1127 23:44:03.187516   97564 cli_runner.go:164] Run: docker network inspect multinode-595051
	W1127 23:44:03.204333   97564 cli_runner.go:211] docker network inspect multinode-595051 returned with exit code 1
	I1127 23:44:03.204367   97564 network_create.go:284] error running [docker network inspect multinode-595051]: docker network inspect multinode-595051: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-595051 not found
	I1127 23:44:03.204380   97564 network_create.go:286] output of [docker network inspect multinode-595051]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-595051 not found
	
	** /stderr **
	I1127 23:44:03.204488   97564 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 23:44:03.222316   97564 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-aa3d266d0d61 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:f5:0f:30:a9} reservation:<nil>}
	I1127 23:44:03.222850   97564 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015520}
	I1127 23:44:03.222878   97564 network_create.go:124] attempt to create docker network multinode-595051 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1127 23:44:03.222941   97564 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-595051 multinode-595051
	I1127 23:44:03.276954   97564 network_create.go:108] docker network multinode-595051 192.168.58.0/24 created
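Note: minikube skipped 192.168.49.0/24 because another profile's bridge already owns it, then created the cluster network on the next free private /24. The result can be spot-checked with the standard docker CLI:

	docker network inspect multinode-595051 \
	  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# expected output: 192.168.58.0/24 192.168.58.1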
	I1127 23:44:03.276987   97564 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-595051" container
	I1127 23:44:03.277052   97564 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1127 23:44:03.292506   97564 cli_runner.go:164] Run: docker volume create multinode-595051 --label name.minikube.sigs.k8s.io=multinode-595051 --label created_by.minikube.sigs.k8s.io=true
	I1127 23:44:03.309574   97564 oci.go:103] Successfully created a docker volume multinode-595051
	I1127 23:44:03.309653   97564 cli_runner.go:164] Run: docker run --rm --name multinode-595051-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-595051 --entrypoint /usr/bin/test -v multinode-595051:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1127 23:44:03.808563   97564 oci.go:107] Successfully prepared a docker volume multinode-595051
	I1127 23:44:03.808601   97564 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:44:03.808621   97564 kic.go:194] Starting extracting preloaded images to volume ...
	I1127 23:44:03.808707   97564 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17206-4554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-595051:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1127 23:44:09.149905   97564 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17206-4554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-595051:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir: (5.341138565s)
	I1127 23:44:09.149943   97564 kic.go:203] duration metric: took 5.341320 seconds to extract preloaded images to volume
	W1127 23:44:09.150100   97564 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1127 23:44:09.150222   97564 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1127 23:44:09.202502   97564 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-595051 --name multinode-595051 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-595051 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-595051 --network multinode-595051 --ip 192.168.58.2 --volume multinode-595051:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1127 23:44:09.500035   97564 cli_runner.go:164] Run: docker container inspect multinode-595051 --format={{.State.Running}}
	I1127 23:44:09.517556   97564 cli_runner.go:164] Run: docker container inspect multinode-595051 --format={{.State.Status}}
	I1127 23:44:09.535454   97564 cli_runner.go:164] Run: docker exec multinode-595051 stat /var/lib/dpkg/alternatives/iptables
	I1127 23:44:09.602929   97564 oci.go:144] the created container "multinode-595051" has a running status.
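Note: the running status and the static IP calculated above can be double-checked against the daemon; the network map key contains a hyphen, so the Go template needs index rather than a dot path:

	docker container inspect multinode-595051 \
	  --format '{{.State.Status}} {{(index .NetworkSettings.Networks "multinode-595051").IPAddress}}'
	# expected output: running 192.168.58.2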
	I1127 23:44:09.602967   97564 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17206-4554/.minikube/machines/multinode-595051/id_rsa...
	I1127 23:44:09.680955   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/machines/multinode-595051/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1127 23:44:09.681005   97564 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17206-4554/.minikube/machines/multinode-595051/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1127 23:44:09.701738   97564 cli_runner.go:164] Run: docker container inspect multinode-595051 --format={{.State.Status}}
	I1127 23:44:09.719900   97564 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1127 23:44:09.719919   97564 kic_runner.go:114] Args: [docker exec --privileged multinode-595051 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1127 23:44:09.798300   97564 cli_runner.go:164] Run: docker container inspect multinode-595051 --format={{.State.Status}}
	I1127 23:44:09.816650   97564 machine.go:88] provisioning docker machine ...
	I1127 23:44:09.816690   97564 ubuntu.go:169] provisioning hostname "multinode-595051"
	I1127 23:44:09.816762   97564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-595051
	I1127 23:44:09.840217   97564 main.go:141] libmachine: Using SSH client type: native
	I1127 23:44:09.840591   97564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1127 23:44:09.840609   97564 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-595051 && echo "multinode-595051" | sudo tee /etc/hostname
	I1127 23:44:09.841206   97564 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44708->127.0.0.1:32847: read: connection reset by peer
	I1127 23:44:12.976401   97564 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-595051
	
	I1127 23:44:12.976481   97564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-595051
	I1127 23:44:12.992627   97564 main.go:141] libmachine: Using SSH client type: native
	I1127 23:44:12.993057   97564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1127 23:44:12.993085   97564 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-595051' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-595051/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-595051' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1127 23:44:13.114074   97564 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1127 23:44:13.114103   97564 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4554/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4554/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4554/.minikube}
	I1127 23:44:13.114138   97564 ubuntu.go:177] setting up certificates
	I1127 23:44:13.114151   97564 provision.go:83] configureAuth start
	I1127 23:44:13.114212   97564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-595051
	I1127 23:44:13.129958   97564 provision.go:138] copyHostCerts
	I1127 23:44:13.129995   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17206-4554/.minikube/ca.pem
	I1127 23:44:13.130029   97564 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4554/.minikube/ca.pem, removing ...
	I1127 23:44:13.130044   97564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4554/.minikube/ca.pem
	I1127 23:44:13.130143   97564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4554/.minikube/ca.pem (1078 bytes)
	I1127 23:44:13.130236   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17206-4554/.minikube/cert.pem
	I1127 23:44:13.130258   97564 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4554/.minikube/cert.pem, removing ...
	I1127 23:44:13.130269   97564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4554/.minikube/cert.pem
	I1127 23:44:13.130308   97564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4554/.minikube/cert.pem (1123 bytes)
	I1127 23:44:13.130363   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17206-4554/.minikube/key.pem
	I1127 23:44:13.130387   97564 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4554/.minikube/key.pem, removing ...
	I1127 23:44:13.130396   97564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4554/.minikube/key.pem
	I1127 23:44:13.130521   97564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4554/.minikube/key.pem (1679 bytes)
	I1127 23:44:13.130637   97564 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4554/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca-key.pem org=jenkins.multinode-595051 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-595051]
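Note: minikube generates this server certificate in-process; a rough openssl equivalent for the same org and SAN set (file names hypothetical, CA and server key assumed to exist already; bash required for the process substitution) would be:

	# assumes server-key.pem, ca.pem and ca-key.pem are already present
	openssl req -new -key server-key.pem -subj "/O=jenkins.multinode-595051" |
	  openssl x509 -req -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	    -extfile <(printf 'subjectAltName=IP:192.168.58.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-595051') \
	    -out server.pem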
	I1127 23:44:13.281249   97564 provision.go:172] copyRemoteCerts
	I1127 23:44:13.281311   97564 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1127 23:44:13.281382   97564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-595051
	I1127 23:44:13.298554   97564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/multinode-595051/id_rsa Username:docker}
	I1127 23:44:13.387136   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1127 23:44:13.387200   97564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1127 23:44:13.408487   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1127 23:44:13.408554   97564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1127 23:44:13.430322   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1127 23:44:13.430402   97564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1127 23:44:13.451723   97564 provision.go:86] duration metric: configureAuth took 337.540001ms
	I1127 23:44:13.451761   97564 ubuntu.go:193] setting minikube options for container-runtime
	I1127 23:44:13.451938   97564 config.go:182] Loaded profile config "multinode-595051": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:44:13.452035   97564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-595051
	I1127 23:44:13.468967   97564 main.go:141] libmachine: Using SSH client type: native
	I1127 23:44:13.469282   97564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1127 23:44:13.469298   97564 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1127 23:44:13.675973   97564 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
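Note: the printf/tee command above writes a one-line sysconfig drop-in and then restarts cri-o so it takes effect; reconstructed from the echoed output, the file is:

	# /etc/sysconfig/crio.minikube
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '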
	
	I1127 23:44:13.676005   97564 machine.go:91] provisioned docker machine in 3.85932865s
	I1127 23:44:13.676016   97564 client.go:171] LocalClient.Create took 10.505099127s
	I1127 23:44:13.676044   97564 start.go:167] duration metric: libmachine.API.Create for "multinode-595051" took 10.505156749s
	I1127 23:44:13.676054   97564 start.go:300] post-start starting for "multinode-595051" (driver="docker")
	I1127 23:44:13.676070   97564 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1127 23:44:13.676136   97564 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1127 23:44:13.676191   97564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-595051
	I1127 23:44:13.693092   97564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/multinode-595051/id_rsa Username:docker}
	I1127 23:44:13.787413   97564 ssh_runner.go:195] Run: cat /etc/os-release
	I1127 23:44:13.790329   97564 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1127 23:44:13.790386   97564 command_runner.go:130] > NAME="Ubuntu"
	I1127 23:44:13.790400   97564 command_runner.go:130] > VERSION_ID="22.04"
	I1127 23:44:13.790409   97564 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1127 23:44:13.790417   97564 command_runner.go:130] > VERSION_CODENAME=jammy
	I1127 23:44:13.790424   97564 command_runner.go:130] > ID=ubuntu
	I1127 23:44:13.790439   97564 command_runner.go:130] > ID_LIKE=debian
	I1127 23:44:13.790445   97564 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1127 23:44:13.790450   97564 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1127 23:44:13.790459   97564 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1127 23:44:13.790468   97564 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1127 23:44:13.790474   97564 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1127 23:44:13.790521   97564 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1127 23:44:13.790547   97564 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1127 23:44:13.790557   97564 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1127 23:44:13.790565   97564 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1127 23:44:13.790576   97564 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4554/.minikube/addons for local assets ...
	I1127 23:44:13.790621   97564 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4554/.minikube/files for local assets ...
	I1127 23:44:13.790689   97564 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4554/.minikube/files/etc/ssl/certs/113062.pem -> 113062.pem in /etc/ssl/certs
	I1127 23:44:13.790697   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/files/etc/ssl/certs/113062.pem -> /etc/ssl/certs/113062.pem
	I1127 23:44:13.790774   97564 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1127 23:44:13.798433   97564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/files/etc/ssl/certs/113062.pem --> /etc/ssl/certs/113062.pem (1708 bytes)
	I1127 23:44:13.819842   97564 start.go:303] post-start completed in 143.770233ms
	I1127 23:44:13.820207   97564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-595051
	I1127 23:44:13.836675   97564 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/config.json ...
	I1127 23:44:13.836935   97564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 23:44:13.836984   97564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-595051
	I1127 23:44:13.853052   97564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/multinode-595051/id_rsa Username:docker}
	I1127 23:44:13.938985   97564 command_runner.go:130] > 20%
	I1127 23:44:13.939066   97564 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1127 23:44:13.942833   97564 command_runner.go:130] > 234G
	I1127 23:44:13.943032   97564 start.go:128] duration metric: createHost completed in 10.774403278s
	I1127 23:44:13.943052   97564 start.go:83] releasing machines lock for "multinode-595051", held for 10.774530889s
	I1127 23:44:13.943107   97564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-595051
	I1127 23:44:13.959277   97564 ssh_runner.go:195] Run: cat /version.json
	I1127 23:44:13.959340   97564 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1127 23:44:13.959365   97564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-595051
	I1127 23:44:13.959413   97564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-595051
	I1127 23:44:13.975378   97564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/multinode-595051/id_rsa Username:docker}
	I1127 23:44:13.977013   97564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/multinode-595051/id_rsa Username:docker}
	I1127 23:44:14.065370   97564 command_runner.go:130] > {"iso_version": "v1.32.1-1699648094-17581", "kicbase_version": "v0.0.42-1700142204-17634", "minikube_version": "v1.32.0", "commit": "6532cab52e164d1138ecb8469e77a57a00b45825"}
	I1127 23:44:14.151154   97564 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1127 23:44:14.153445   97564 ssh_runner.go:195] Run: systemctl --version
	I1127 23:44:14.157756   97564 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I1127 23:44:14.157789   97564 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1127 23:44:14.157855   97564 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1127 23:44:14.293983   97564 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1127 23:44:14.297966   97564 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1127 23:44:14.298019   97564 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1127 23:44:14.298030   97564 command_runner.go:130] > Device: 37h/55d	Inode: 541438      Links: 1
	I1127 23:44:14.298042   97564 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1127 23:44:14.298071   97564 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1127 23:44:14.298080   97564 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1127 23:44:14.298089   97564 command_runner.go:130] > Change: 2023-11-27 23:25:11.088260507 +0000
	I1127 23:44:14.298103   97564 command_runner.go:130] >  Birth: 2023-11-27 23:25:11.088260507 +0000
	I1127 23:44:14.298300   97564 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 23:44:14.315900   97564 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1127 23:44:14.315981   97564 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 23:44:14.341964   97564 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1127 23:44:14.342013   97564 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1127 23:44:14.342021   97564 start.go:472] detecting cgroup driver to use...
	I1127 23:44:14.342072   97564 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1127 23:44:14.342125   97564 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1127 23:44:14.355888   97564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1127 23:44:14.365972   97564 docker.go:203] disabling cri-docker service (if available) ...
	I1127 23:44:14.366031   97564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1127 23:44:14.377835   97564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1127 23:44:14.390677   97564 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1127 23:44:14.467575   97564 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1127 23:44:14.547818   97564 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1127 23:44:14.547844   97564 docker.go:219] disabling docker service ...
	I1127 23:44:14.547893   97564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1127 23:44:14.564630   97564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1127 23:44:14.574606   97564 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1127 23:44:14.652312   97564 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1127 23:44:14.652396   97564 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1127 23:44:14.735579   97564 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1127 23:44:14.735667   97564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1127 23:44:14.745811   97564 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 23:44:14.759390   97564 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
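Note: the resulting crictl configuration is the single line echoed back above, pointing crictl at the cri-o socket:

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock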
	I1127 23:44:14.760158   97564 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1127 23:44:14.760207   97564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:44:14.769103   97564 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1127 23:44:14.769156   97564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:44:14.777789   97564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:44:14.786222   97564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
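Note: the three sed edits above pin the pause image and align cri-o with the host's cgroupfs driver; the touched keys in /etc/crio/crio.conf.d/02-crio.conf end up as follows (section headers assumed from the stock cri-o layout, other keys omitted):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"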
	I1127 23:44:14.794442   97564 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1127 23:44:14.802229   97564 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1127 23:44:14.810006   97564 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1127 23:44:14.810091   97564 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1127 23:44:14.817394   97564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 23:44:14.898373   97564 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1127 23:44:14.999282   97564 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1127 23:44:14.999359   97564 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1127 23:44:15.002608   97564 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1127 23:44:15.002641   97564 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1127 23:44:15.002650   97564 command_runner.go:130] > Device: 40h/64d	Inode: 190         Links: 1
	I1127 23:44:15.002657   97564 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1127 23:44:15.002663   97564 command_runner.go:130] > Access: 2023-11-27 23:44:14.988832853 +0000
	I1127 23:44:15.002668   97564 command_runner.go:130] > Modify: 2023-11-27 23:44:14.988832853 +0000
	I1127 23:44:15.002675   97564 command_runner.go:130] > Change: 2023-11-27 23:44:14.988832853 +0000
	I1127 23:44:15.002680   97564 command_runner.go:130] >  Birth: -
	I1127 23:44:15.002696   97564 start.go:540] Will wait 60s for crictl version
	I1127 23:44:15.002741   97564 ssh_runner.go:195] Run: which crictl
	I1127 23:44:15.005676   97564 command_runner.go:130] > /usr/bin/crictl
	I1127 23:44:15.005775   97564 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1127 23:44:15.035937   97564 command_runner.go:130] > Version:  0.1.0
	I1127 23:44:15.035962   97564 command_runner.go:130] > RuntimeName:  cri-o
	I1127 23:44:15.035970   97564 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1127 23:44:15.035979   97564 command_runner.go:130] > RuntimeApiVersion:  v1
	I1127 23:44:15.037626   97564 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1127 23:44:15.037713   97564 ssh_runner.go:195] Run: crio --version
	I1127 23:44:15.069121   97564 command_runner.go:130] > crio version 1.24.6
	I1127 23:44:15.069150   97564 command_runner.go:130] > Version:          1.24.6
	I1127 23:44:15.069157   97564 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1127 23:44:15.069161   97564 command_runner.go:130] > GitTreeState:     clean
	I1127 23:44:15.069167   97564 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1127 23:44:15.069172   97564 command_runner.go:130] > GoVersion:        go1.18.2
	I1127 23:44:15.069175   97564 command_runner.go:130] > Compiler:         gc
	I1127 23:44:15.069191   97564 command_runner.go:130] > Platform:         linux/amd64
	I1127 23:44:15.069201   97564 command_runner.go:130] > Linkmode:         dynamic
	I1127 23:44:15.069217   97564 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1127 23:44:15.069229   97564 command_runner.go:130] > SeccompEnabled:   true
	I1127 23:44:15.069236   97564 command_runner.go:130] > AppArmorEnabled:  false
	I1127 23:44:15.070495   97564 ssh_runner.go:195] Run: crio --version
	I1127 23:44:15.101757   97564 command_runner.go:130] > crio version 1.24.6
	I1127 23:44:15.101776   97564 command_runner.go:130] > Version:          1.24.6
	I1127 23:44:15.101783   97564 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1127 23:44:15.101787   97564 command_runner.go:130] > GitTreeState:     clean
	I1127 23:44:15.101794   97564 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1127 23:44:15.101799   97564 command_runner.go:130] > GoVersion:        go1.18.2
	I1127 23:44:15.101803   97564 command_runner.go:130] > Compiler:         gc
	I1127 23:44:15.101808   97564 command_runner.go:130] > Platform:         linux/amd64
	I1127 23:44:15.101816   97564 command_runner.go:130] > Linkmode:         dynamic
	I1127 23:44:15.101827   97564 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1127 23:44:15.101838   97564 command_runner.go:130] > SeccompEnabled:   true
	I1127 23:44:15.101848   97564 command_runner.go:130] > AppArmorEnabled:  false
	I1127 23:44:15.106358   97564 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1127 23:44:15.108006   97564 cli_runner.go:164] Run: docker network inspect multinode-595051 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 23:44:15.123851   97564 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1127 23:44:15.127294   97564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 23:44:15.137766   97564 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:44:15.137817   97564 ssh_runner.go:195] Run: sudo crictl images --output json
	I1127 23:44:15.191461   97564 command_runner.go:130] > {
	I1127 23:44:15.191483   97564 command_runner.go:130] >   "images": [
	I1127 23:44:15.191488   97564 command_runner.go:130] >     {
	I1127 23:44:15.191500   97564 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1127 23:44:15.191508   97564 command_runner.go:130] >       "repoTags": [
	I1127 23:44:15.191517   97564 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1127 23:44:15.191522   97564 command_runner.go:130] >       ],
	I1127 23:44:15.191528   97564 command_runner.go:130] >       "repoDigests": [
	I1127 23:44:15.191540   97564 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1127 23:44:15.191553   97564 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1127 23:44:15.191561   97564 command_runner.go:130] >       ],
	I1127 23:44:15.191571   97564 command_runner.go:130] >       "size": "65258016",
	I1127 23:44:15.191575   97564 command_runner.go:130] >       "uid": null,
	I1127 23:44:15.191582   97564 command_runner.go:130] >       "username": "",
	I1127 23:44:15.191596   97564 command_runner.go:130] >       "spec": null,
	I1127 23:44:15.191605   97564 command_runner.go:130] >       "pinned": false
	I1127 23:44:15.191616   97564 command_runner.go:130] >     },
	I1127 23:44:15.191625   97564 command_runner.go:130] >     {
	I1127 23:44:15.191635   97564 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1127 23:44:15.191642   97564 command_runner.go:130] >       "repoTags": [
	I1127 23:44:15.191647   97564 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1127 23:44:15.191652   97564 command_runner.go:130] >       ],
	I1127 23:44:15.191656   97564 command_runner.go:130] >       "repoDigests": [
	I1127 23:44:15.191666   97564 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1127 23:44:15.191675   97564 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1127 23:44:15.191681   97564 command_runner.go:130] >       ],
	I1127 23:44:15.191692   97564 command_runner.go:130] >       "size": "31470524",
	I1127 23:44:15.191702   97564 command_runner.go:130] >       "uid": null,
	I1127 23:44:15.191709   97564 command_runner.go:130] >       "username": "",
	I1127 23:44:15.191719   97564 command_runner.go:130] >       "spec": null,
	I1127 23:44:15.191726   97564 command_runner.go:130] >       "pinned": false
	I1127 23:44:15.191735   97564 command_runner.go:130] >     },
	I1127 23:44:15.191739   97564 command_runner.go:130] >     {
	I1127 23:44:15.191745   97564 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1127 23:44:15.191754   97564 command_runner.go:130] >       "repoTags": [
	I1127 23:44:15.191762   97564 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1127 23:44:15.191766   97564 command_runner.go:130] >       ],
	I1127 23:44:15.191772   97564 command_runner.go:130] >       "repoDigests": [
	I1127 23:44:15.191784   97564 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1127 23:44:15.191800   97564 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1127 23:44:15.191810   97564 command_runner.go:130] >       ],
	I1127 23:44:15.191819   97564 command_runner.go:130] >       "size": "53621675",
	I1127 23:44:15.191830   97564 command_runner.go:130] >       "uid": null,
	I1127 23:44:15.191840   97564 command_runner.go:130] >       "username": "",
	I1127 23:44:15.191847   97564 command_runner.go:130] >       "spec": null,
	I1127 23:44:15.191852   97564 command_runner.go:130] >       "pinned": false
	I1127 23:44:15.191858   97564 command_runner.go:130] >     },
	I1127 23:44:15.191862   97564 command_runner.go:130] >     {
	I1127 23:44:15.191868   97564 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1127 23:44:15.191877   97564 command_runner.go:130] >       "repoTags": [
	I1127 23:44:15.191889   97564 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1127 23:44:15.191896   97564 command_runner.go:130] >       ],
	I1127 23:44:15.191910   97564 command_runner.go:130] >       "repoDigests": [
	I1127 23:44:15.191925   97564 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1127 23:44:15.191939   97564 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1127 23:44:15.191964   97564 command_runner.go:130] >       ],
	I1127 23:44:15.191977   97564 command_runner.go:130] >       "size": "295456551",
	I1127 23:44:15.191984   97564 command_runner.go:130] >       "uid": {
	I1127 23:44:15.191991   97564 command_runner.go:130] >         "value": "0"
	I1127 23:44:15.192001   97564 command_runner.go:130] >       },
	I1127 23:44:15.192010   97564 command_runner.go:130] >       "username": "",
	I1127 23:44:15.192020   97564 command_runner.go:130] >       "spec": null,
	I1127 23:44:15.192031   97564 command_runner.go:130] >       "pinned": false
	I1127 23:44:15.192039   97564 command_runner.go:130] >     },
	I1127 23:44:15.192045   97564 command_runner.go:130] >     {
	I1127 23:44:15.192056   97564 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1127 23:44:15.192063   97564 command_runner.go:130] >       "repoTags": [
	I1127 23:44:15.192071   97564 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1127 23:44:15.192081   97564 command_runner.go:130] >       ],
	I1127 23:44:15.192089   97564 command_runner.go:130] >       "repoDigests": [
	I1127 23:44:15.192111   97564 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1127 23:44:15.192126   97564 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1127 23:44:15.192135   97564 command_runner.go:130] >       ],
	I1127 23:44:15.192141   97564 command_runner.go:130] >       "size": "127226832",
	I1127 23:44:15.192232   97564 command_runner.go:130] >       "uid": {
	I1127 23:44:15.192260   97564 command_runner.go:130] >         "value": "0"
	I1127 23:44:15.192269   97564 command_runner.go:130] >       },
	I1127 23:44:15.192275   97564 command_runner.go:130] >       "username": "",
	I1127 23:44:15.192282   97564 command_runner.go:130] >       "spec": null,
	I1127 23:44:15.192289   97564 command_runner.go:130] >       "pinned": false
	I1127 23:44:15.192298   97564 command_runner.go:130] >     },
	I1127 23:44:15.192308   97564 command_runner.go:130] >     {
	I1127 23:44:15.192319   97564 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1127 23:44:15.192330   97564 command_runner.go:130] >       "repoTags": [
	I1127 23:44:15.192343   97564 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1127 23:44:15.192352   97564 command_runner.go:130] >       ],
	I1127 23:44:15.192359   97564 command_runner.go:130] >       "repoDigests": [
	I1127 23:44:15.192374   97564 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1127 23:44:15.192398   97564 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1127 23:44:15.192408   97564 command_runner.go:130] >       ],
	I1127 23:44:15.192416   97564 command_runner.go:130] >       "size": "123261750",
	I1127 23:44:15.192426   97564 command_runner.go:130] >       "uid": {
	I1127 23:44:15.192435   97564 command_runner.go:130] >         "value": "0"
	I1127 23:44:15.192444   97564 command_runner.go:130] >       },
	I1127 23:44:15.192451   97564 command_runner.go:130] >       "username": "",
	I1127 23:44:15.192461   97564 command_runner.go:130] >       "spec": null,
	I1127 23:44:15.192468   97564 command_runner.go:130] >       "pinned": false
	I1127 23:44:15.192476   97564 command_runner.go:130] >     },
	I1127 23:44:15.192480   97564 command_runner.go:130] >     {
	I1127 23:44:15.192488   97564 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1127 23:44:15.192499   97564 command_runner.go:130] >       "repoTags": [
	I1127 23:44:15.192511   97564 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1127 23:44:15.192520   97564 command_runner.go:130] >       ],
	I1127 23:44:15.192527   97564 command_runner.go:130] >       "repoDigests": [
	I1127 23:44:15.192543   97564 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1127 23:44:15.192557   97564 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1127 23:44:15.192569   97564 command_runner.go:130] >       ],
	I1127 23:44:15.192579   97564 command_runner.go:130] >       "size": "74749335",
	I1127 23:44:15.192589   97564 command_runner.go:130] >       "uid": null,
	I1127 23:44:15.192602   97564 command_runner.go:130] >       "username": "",
	I1127 23:44:15.192612   97564 command_runner.go:130] >       "spec": null,
	I1127 23:44:15.192621   97564 command_runner.go:130] >       "pinned": false
	I1127 23:44:15.192630   97564 command_runner.go:130] >     },
	I1127 23:44:15.192639   97564 command_runner.go:130] >     {
	I1127 23:44:15.192649   97564 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1127 23:44:15.192658   97564 command_runner.go:130] >       "repoTags": [
	I1127 23:44:15.192663   97564 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1127 23:44:15.192669   97564 command_runner.go:130] >       ],
	I1127 23:44:15.192676   97564 command_runner.go:130] >       "repoDigests": [
	I1127 23:44:15.192749   97564 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1127 23:44:15.192761   97564 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1127 23:44:15.192767   97564 command_runner.go:130] >       ],
	I1127 23:44:15.192774   97564 command_runner.go:130] >       "size": "61551410",
	I1127 23:44:15.192784   97564 command_runner.go:130] >       "uid": {
	I1127 23:44:15.192796   97564 command_runner.go:130] >         "value": "0"
	I1127 23:44:15.192805   97564 command_runner.go:130] >       },
	I1127 23:44:15.192811   97564 command_runner.go:130] >       "username": "",
	I1127 23:44:15.192820   97564 command_runner.go:130] >       "spec": null,
	I1127 23:44:15.192826   97564 command_runner.go:130] >       "pinned": false
	I1127 23:44:15.192834   97564 command_runner.go:130] >     },
	I1127 23:44:15.192839   97564 command_runner.go:130] >     {
	I1127 23:44:15.192852   97564 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1127 23:44:15.192858   97564 command_runner.go:130] >       "repoTags": [
	I1127 23:44:15.192869   97564 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1127 23:44:15.192879   97564 command_runner.go:130] >       ],
	I1127 23:44:15.192886   97564 command_runner.go:130] >       "repoDigests": [
	I1127 23:44:15.192900   97564 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1127 23:44:15.192915   97564 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1127 23:44:15.192924   97564 command_runner.go:130] >       ],
	I1127 23:44:15.192932   97564 command_runner.go:130] >       "size": "750414",
	I1127 23:44:15.192941   97564 command_runner.go:130] >       "uid": {
	I1127 23:44:15.192949   97564 command_runner.go:130] >         "value": "65535"
	I1127 23:44:15.192957   97564 command_runner.go:130] >       },
	I1127 23:44:15.192967   97564 command_runner.go:130] >       "username": "",
	I1127 23:44:15.192987   97564 command_runner.go:130] >       "spec": null,
	I1127 23:44:15.192994   97564 command_runner.go:130] >       "pinned": false
	I1127 23:44:15.193004   97564 command_runner.go:130] >     }
	I1127 23:44:15.193010   97564 command_runner.go:130] >   ]
	I1127 23:44:15.193019   97564 command_runner.go:130] > }
	I1127 23:44:15.194284   97564 crio.go:496] all images are preloaded for cri-o runtime.
	I1127 23:44:15.194305   97564 crio.go:415] Images already preloaded, skipping extraction
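	(Editor's note) The `crictl images --output json` dump above (and its second run below) shares one shape: a top-level "images" array of image records. For working with such a dump offline, here is a minimal Go sketch; the struct fields mirror the JSON keys logged above, while the file name images.json and all identifiers are illustrative assumptions, not part of minikube.

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// imageList mirrors the JSON payload of `sudo crictl images --output json`
	// as logged above: a top-level "images" array of image records.
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"` // size in bytes, encoded as a decimal string
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		// images.json is assumed to hold the JSON body extracted from this log.
		data, err := os.ReadFile("images.json")
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(data, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			tag := "<none>"
			if len(img.RepoTags) > 0 {
				tag = img.RepoTags[0]
			}
			fmt.Printf("%-55s %s bytes\n", tag, img.Size)
		}
	}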
	I1127 23:44:15.194359   97564 ssh_runner.go:195] Run: sudo crictl images --output json
	I1127 23:44:15.223469   97564 command_runner.go:130] > {
	I1127 23:44:15.223493   97564 command_runner.go:130] >   "images": [
	I1127 23:44:15.223499   97564 command_runner.go:130] >     {
	I1127 23:44:15.223518   97564 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1127 23:44:15.223526   97564 command_runner.go:130] >       "repoTags": [
	I1127 23:44:15.223540   97564 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1127 23:44:15.223546   97564 command_runner.go:130] >       ],
	I1127 23:44:15.223554   97564 command_runner.go:130] >       "repoDigests": [
	I1127 23:44:15.223570   97564 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1127 23:44:15.223586   97564 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1127 23:44:15.223593   97564 command_runner.go:130] >       ],
	I1127 23:44:15.223604   97564 command_runner.go:130] >       "size": "65258016",
	I1127 23:44:15.223619   97564 command_runner.go:130] >       "uid": null,
	I1127 23:44:15.223630   97564 command_runner.go:130] >       "username": "",
	I1127 23:44:15.223647   97564 command_runner.go:130] >       "spec": null,
	I1127 23:44:15.223656   97564 command_runner.go:130] >       "pinned": false
	I1127 23:44:15.223663   97564 command_runner.go:130] >     },
	I1127 23:44:15.223673   97564 command_runner.go:130] >     {
	I1127 23:44:15.223681   97564 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1127 23:44:15.223687   97564 command_runner.go:130] >       "repoTags": [
	I1127 23:44:15.223695   97564 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1127 23:44:15.223701   97564 command_runner.go:130] >       ],
	I1127 23:44:15.223707   97564 command_runner.go:130] >       "repoDigests": [
	I1127 23:44:15.223720   97564 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1127 23:44:15.223732   97564 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1127 23:44:15.223740   97564 command_runner.go:130] >       ],
	I1127 23:44:15.223754   97564 command_runner.go:130] >       "size": "31470524",
	I1127 23:44:15.223762   97564 command_runner.go:130] >       "uid": null,
	I1127 23:44:15.223773   97564 command_runner.go:130] >       "username": "",
	I1127 23:44:15.223783   97564 command_runner.go:130] >       "spec": null,
	I1127 23:44:15.223796   97564 command_runner.go:130] >       "pinned": false
	I1127 23:44:15.223806   97564 command_runner.go:130] >     },
	I1127 23:44:15.223814   97564 command_runner.go:130] >     {
	I1127 23:44:15.223826   97564 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1127 23:44:15.223836   97564 command_runner.go:130] >       "repoTags": [
	I1127 23:44:15.223849   97564 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1127 23:44:15.223859   97564 command_runner.go:130] >       ],
	I1127 23:44:15.223869   97564 command_runner.go:130] >       "repoDigests": [
	I1127 23:44:15.223882   97564 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1127 23:44:15.223898   97564 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1127 23:44:15.223908   97564 command_runner.go:130] >       ],
	I1127 23:44:15.223918   97564 command_runner.go:130] >       "size": "53621675",
	I1127 23:44:15.223928   97564 command_runner.go:130] >       "uid": null,
	I1127 23:44:15.223946   97564 command_runner.go:130] >       "username": "",
	I1127 23:44:15.223955   97564 command_runner.go:130] >       "spec": null,
	I1127 23:44:15.223961   97564 command_runner.go:130] >       "pinned": false
	I1127 23:44:15.223966   97564 command_runner.go:130] >     },
	I1127 23:44:15.223971   97564 command_runner.go:130] >     {
	I1127 23:44:15.223985   97564 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1127 23:44:15.223997   97564 command_runner.go:130] >       "repoTags": [
	I1127 23:44:15.224009   97564 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1127 23:44:15.224019   97564 command_runner.go:130] >       ],
	I1127 23:44:15.224029   97564 command_runner.go:130] >       "repoDigests": [
	I1127 23:44:15.224044   97564 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1127 23:44:15.224060   97564 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1127 23:44:15.224078   97564 command_runner.go:130] >       ],
	I1127 23:44:15.224090   97564 command_runner.go:130] >       "size": "295456551",
	I1127 23:44:15.224098   97564 command_runner.go:130] >       "uid": {
	I1127 23:44:15.224108   97564 command_runner.go:130] >         "value": "0"
	I1127 23:44:15.224118   97564 command_runner.go:130] >       },
	I1127 23:44:15.224129   97564 command_runner.go:130] >       "username": "",
	I1127 23:44:15.224137   97564 command_runner.go:130] >       "spec": null,
	I1127 23:44:15.224148   97564 command_runner.go:130] >       "pinned": false
	I1127 23:44:15.224156   97564 command_runner.go:130] >     },
	I1127 23:44:15.224163   97564 command_runner.go:130] >     {
	I1127 23:44:15.224177   97564 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1127 23:44:15.224191   97564 command_runner.go:130] >       "repoTags": [
	I1127 23:44:15.224204   97564 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1127 23:44:15.224214   97564 command_runner.go:130] >       ],
	I1127 23:44:15.224224   97564 command_runner.go:130] >       "repoDigests": [
	I1127 23:44:15.224240   97564 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1127 23:44:15.224256   97564 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1127 23:44:15.224271   97564 command_runner.go:130] >       ],
	I1127 23:44:15.224282   97564 command_runner.go:130] >       "size": "127226832",
	I1127 23:44:15.224290   97564 command_runner.go:130] >       "uid": {
	I1127 23:44:15.224300   97564 command_runner.go:130] >         "value": "0"
	I1127 23:44:15.224308   97564 command_runner.go:130] >       },
	I1127 23:44:15.224319   97564 command_runner.go:130] >       "username": "",
	I1127 23:44:15.224330   97564 command_runner.go:130] >       "spec": null,
	I1127 23:44:15.224340   97564 command_runner.go:130] >       "pinned": false
	I1127 23:44:15.224347   97564 command_runner.go:130] >     },
	I1127 23:44:15.224355   97564 command_runner.go:130] >     {
	I1127 23:44:15.224367   97564 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1127 23:44:15.224379   97564 command_runner.go:130] >       "repoTags": [
	I1127 23:44:15.224391   97564 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1127 23:44:15.224397   97564 command_runner.go:130] >       ],
	I1127 23:44:15.224403   97564 command_runner.go:130] >       "repoDigests": [
	I1127 23:44:15.224411   97564 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1127 23:44:15.224419   97564 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1127 23:44:15.224422   97564 command_runner.go:130] >       ],
	I1127 23:44:15.224427   97564 command_runner.go:130] >       "size": "123261750",
	I1127 23:44:15.224432   97564 command_runner.go:130] >       "uid": {
	I1127 23:44:15.224436   97564 command_runner.go:130] >         "value": "0"
	I1127 23:44:15.224439   97564 command_runner.go:130] >       },
	I1127 23:44:15.224444   97564 command_runner.go:130] >       "username": "",
	I1127 23:44:15.224463   97564 command_runner.go:130] >       "spec": null,
	I1127 23:44:15.224470   97564 command_runner.go:130] >       "pinned": false
	I1127 23:44:15.224473   97564 command_runner.go:130] >     },
	I1127 23:44:15.224477   97564 command_runner.go:130] >     {
	I1127 23:44:15.224483   97564 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1127 23:44:15.224489   97564 command_runner.go:130] >       "repoTags": [
	I1127 23:44:15.224494   97564 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1127 23:44:15.224503   97564 command_runner.go:130] >       ],
	I1127 23:44:15.224508   97564 command_runner.go:130] >       "repoDigests": [
	I1127 23:44:15.224515   97564 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1127 23:44:15.224525   97564 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1127 23:44:15.224528   97564 command_runner.go:130] >       ],
	I1127 23:44:15.224533   97564 command_runner.go:130] >       "size": "74749335",
	I1127 23:44:15.224538   97564 command_runner.go:130] >       "uid": null,
	I1127 23:44:15.224543   97564 command_runner.go:130] >       "username": "",
	I1127 23:44:15.224552   97564 command_runner.go:130] >       "spec": null,
	I1127 23:44:15.224556   97564 command_runner.go:130] >       "pinned": false
	I1127 23:44:15.224561   97564 command_runner.go:130] >     },
	I1127 23:44:15.224565   97564 command_runner.go:130] >     {
	I1127 23:44:15.224575   97564 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1127 23:44:15.224582   97564 command_runner.go:130] >       "repoTags": [
	I1127 23:44:15.224587   97564 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1127 23:44:15.224592   97564 command_runner.go:130] >       ],
	I1127 23:44:15.224596   97564 command_runner.go:130] >       "repoDigests": [
	I1127 23:44:15.224647   97564 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1127 23:44:15.224660   97564 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1127 23:44:15.224664   97564 command_runner.go:130] >       ],
	I1127 23:44:15.224668   97564 command_runner.go:130] >       "size": "61551410",
	I1127 23:44:15.224672   97564 command_runner.go:130] >       "uid": {
	I1127 23:44:15.224675   97564 command_runner.go:130] >         "value": "0"
	I1127 23:44:15.224679   97564 command_runner.go:130] >       },
	I1127 23:44:15.224683   97564 command_runner.go:130] >       "username": "",
	I1127 23:44:15.224688   97564 command_runner.go:130] >       "spec": null,
	I1127 23:44:15.224694   97564 command_runner.go:130] >       "pinned": false
	I1127 23:44:15.224698   97564 command_runner.go:130] >     },
	I1127 23:44:15.224701   97564 command_runner.go:130] >     {
	I1127 23:44:15.224708   97564 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1127 23:44:15.224717   97564 command_runner.go:130] >       "repoTags": [
	I1127 23:44:15.224725   97564 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1127 23:44:15.224733   97564 command_runner.go:130] >       ],
	I1127 23:44:15.224740   97564 command_runner.go:130] >       "repoDigests": [
	I1127 23:44:15.224751   97564 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1127 23:44:15.224757   97564 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1127 23:44:15.224767   97564 command_runner.go:130] >       ],
	I1127 23:44:15.224771   97564 command_runner.go:130] >       "size": "750414",
	I1127 23:44:15.224781   97564 command_runner.go:130] >       "uid": {
	I1127 23:44:15.224787   97564 command_runner.go:130] >         "value": "65535"
	I1127 23:44:15.224791   97564 command_runner.go:130] >       },
	I1127 23:44:15.224795   97564 command_runner.go:130] >       "username": "",
	I1127 23:44:15.224799   97564 command_runner.go:130] >       "spec": null,
	I1127 23:44:15.224804   97564 command_runner.go:130] >       "pinned": false
	I1127 23:44:15.224809   97564 command_runner.go:130] >     }
	I1127 23:44:15.224812   97564 command_runner.go:130] >   ]
	I1127 23:44:15.224816   97564 command_runner.go:130] > }
	I1127 23:44:15.225496   97564 crio.go:496] all images are preloaded for cri-o runtime.
	I1127 23:44:15.225520   97564 cache_images.go:84] Images are preloaded, skipping loading
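	(Editor's note) The "Images are preloaded, skipping loading" decision logged here amounts to a set-containment check: every image required for the requested Kubernetes version must already appear in the runtime's image list. A minimal sketch of that check follows; the function and variable names are illustrative, not minikube's actual implementation.

	package main

	import "fmt"

	// allPreloaded reports whether every required image tag appears in the
	// set of tags listed by the container runtime.
	func allPreloaded(required []string, listed map[string]bool) bool {
		for _, img := range required {
			if !listed[img] {
				return false
			}
		}
		return true
	}

	func main() {
		listed := map[string]bool{
			"registry.k8s.io/kube-apiserver:v1.28.4": true,
			"registry.k8s.io/etcd:3.5.9-0":           true,
		}
		required := []string{"registry.k8s.io/kube-apiserver:v1.28.4"}
		fmt.Println(allPreloaded(required, listed)) // true -> skip extraction
	}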
	I1127 23:44:15.225584   97564 ssh_runner.go:195] Run: crio config
	I1127 23:44:15.260821   97564 command_runner.go:130] ! time="2023-11-27 23:44:15.260356341Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1127 23:44:15.260858   97564 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1127 23:44:15.266237   97564 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1127 23:44:15.266258   97564 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1127 23:44:15.266265   97564 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1127 23:44:15.266276   97564 command_runner.go:130] > #
	I1127 23:44:15.266283   97564 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1127 23:44:15.266289   97564 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1127 23:44:15.266295   97564 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1127 23:44:15.266302   97564 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1127 23:44:15.266306   97564 command_runner.go:130] > # reload'.
	I1127 23:44:15.266312   97564 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1127 23:44:15.266318   97564 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1127 23:44:15.266329   97564 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1127 23:44:15.266337   97564 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1127 23:44:15.266344   97564 command_runner.go:130] > [crio]
	I1127 23:44:15.266350   97564 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1127 23:44:15.266357   97564 command_runner.go:130] > # container images, in this directory.
	I1127 23:44:15.266366   97564 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1127 23:44:15.266374   97564 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1127 23:44:15.266381   97564 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1127 23:44:15.266387   97564 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1127 23:44:15.266396   97564 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1127 23:44:15.266400   97564 command_runner.go:130] > # storage_driver = "vfs"
	I1127 23:44:15.266408   97564 command_runner.go:130] > # List of options to pass to the storage driver. Please refer to
	I1127 23:44:15.266416   97564 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1127 23:44:15.266423   97564 command_runner.go:130] > # storage_option = [
	I1127 23:44:15.266426   97564 command_runner.go:130] > # ]
	I1127 23:44:15.266441   97564 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1127 23:44:15.266449   97564 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1127 23:44:15.266456   97564 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1127 23:44:15.266464   97564 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1127 23:44:15.266472   97564 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1127 23:44:15.266478   97564 command_runner.go:130] > # always happen on a node reboot
	I1127 23:44:15.266483   97564 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1127 23:44:15.266491   97564 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1127 23:44:15.266497   97564 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1127 23:44:15.266510   97564 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1127 23:44:15.266517   97564 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1127 23:44:15.266525   97564 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1127 23:44:15.266534   97564 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1127 23:44:15.266541   97564 command_runner.go:130] > # internal_wipe = true
	I1127 23:44:15.266558   97564 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1127 23:44:15.266566   97564 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1127 23:44:15.266571   97564 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1127 23:44:15.266579   97564 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1127 23:44:15.266587   97564 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1127 23:44:15.266593   97564 command_runner.go:130] > [crio.api]
	I1127 23:44:15.266598   97564 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1127 23:44:15.266608   97564 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1127 23:44:15.266616   97564 command_runner.go:130] > # IP address on which the stream server will listen.
	I1127 23:44:15.266622   97564 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1127 23:44:15.266629   97564 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1127 23:44:15.266636   97564 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1127 23:44:15.266640   97564 command_runner.go:130] > # stream_port = "0"
	I1127 23:44:15.266647   97564 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1127 23:44:15.266652   97564 command_runner.go:130] > # stream_enable_tls = false
	I1127 23:44:15.266659   97564 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1127 23:44:15.266665   97564 command_runner.go:130] > # stream_idle_timeout = ""
	I1127 23:44:15.266671   97564 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1127 23:44:15.266679   97564 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1127 23:44:15.266684   97564 command_runner.go:130] > # minutes.
	I1127 23:44:15.266689   97564 command_runner.go:130] > # stream_tls_cert = ""
	I1127 23:44:15.266697   97564 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1127 23:44:15.266705   97564 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1127 23:44:15.266711   97564 command_runner.go:130] > # stream_tls_key = ""
	I1127 23:44:15.266717   97564 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1127 23:44:15.266727   97564 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1127 23:44:15.266735   97564 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1127 23:44:15.266739   97564 command_runner.go:130] > # stream_tls_ca = ""
	I1127 23:44:15.266748   97564 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1127 23:44:15.266755   97564 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1127 23:44:15.266762   97564 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1127 23:44:15.266768   97564 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
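	(Editor's note) For scale: the grpc_max_send_msg_size / grpc_max_recv_msg_size value of 83886080 bytes shown above equals 80 x 1024 x 1024 (an 80 MiB cap), five times the 16 x 1024 x 1024 = 16777216-byte (16 MiB) default mentioned in the config comments.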
	I1127 23:44:15.266792   97564 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1127 23:44:15.266800   97564 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1127 23:44:15.266807   97564 command_runner.go:130] > [crio.runtime]
	I1127 23:44:15.266813   97564 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1127 23:44:15.266820   97564 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1127 23:44:15.266824   97564 command_runner.go:130] > # "nofile=1024:2048"
	I1127 23:44:15.266835   97564 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1127 23:44:15.266842   97564 command_runner.go:130] > # default_ulimits = [
	I1127 23:44:15.266845   97564 command_runner.go:130] > # ]
	I1127 23:44:15.266853   97564 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1127 23:44:15.266857   97564 command_runner.go:130] > # no_pivot = false
	I1127 23:44:15.266865   97564 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1127 23:44:15.266874   97564 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1127 23:44:15.266880   97564 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1127 23:44:15.266886   97564 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1127 23:44:15.266893   97564 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1127 23:44:15.266900   97564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1127 23:44:15.266906   97564 command_runner.go:130] > # conmon = ""
	I1127 23:44:15.266910   97564 command_runner.go:130] > # Cgroup setting for conmon
	I1127 23:44:15.266917   97564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1127 23:44:15.266923   97564 command_runner.go:130] > conmon_cgroup = "pod"
	I1127 23:44:15.266929   97564 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1127 23:44:15.266936   97564 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1127 23:44:15.266943   97564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1127 23:44:15.266949   97564 command_runner.go:130] > # conmon_env = [
	I1127 23:44:15.266952   97564 command_runner.go:130] > # ]
	I1127 23:44:15.266959   97564 command_runner.go:130] > # Additional environment variables to set for all the
	I1127 23:44:15.266964   97564 command_runner.go:130] > # containers. These are overridden if set in the
	I1127 23:44:15.266971   97564 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1127 23:44:15.266978   97564 command_runner.go:130] > # default_env = [
	I1127 23:44:15.266986   97564 command_runner.go:130] > # ]
	I1127 23:44:15.266995   97564 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1127 23:44:15.266999   97564 command_runner.go:130] > # selinux = false
	I1127 23:44:15.267007   97564 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1127 23:44:15.267015   97564 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1127 23:44:15.267023   97564 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1127 23:44:15.267029   97564 command_runner.go:130] > # seccomp_profile = ""
	I1127 23:44:15.267035   97564 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1127 23:44:15.267043   97564 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1127 23:44:15.267051   97564 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1127 23:44:15.267056   97564 command_runner.go:130] > # which might increase security.
	I1127 23:44:15.267061   97564 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1127 23:44:15.267069   97564 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1127 23:44:15.267077   97564 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1127 23:44:15.267085   97564 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1127 23:44:15.267096   97564 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I1127 23:44:15.267106   97564 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:44:15.267116   97564 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1127 23:44:15.267124   97564 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1127 23:44:15.267130   97564 command_runner.go:130] > # the cgroup blockio controller.
	I1127 23:44:15.267135   97564 command_runner.go:130] > # blockio_config_file = ""
	I1127 23:44:15.267143   97564 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1127 23:44:15.267150   97564 command_runner.go:130] > # irqbalance daemon.
	I1127 23:44:15.267155   97564 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1127 23:44:15.267164   97564 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1127 23:44:15.267174   97564 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:44:15.267180   97564 command_runner.go:130] > # rdt_config_file = ""
	I1127 23:44:15.267186   97564 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1127 23:44:15.267192   97564 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1127 23:44:15.267198   97564 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1127 23:44:15.267204   97564 command_runner.go:130] > # separate_pull_cgroup = ""
	I1127 23:44:15.267211   97564 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1127 23:44:15.267219   97564 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1127 23:44:15.267225   97564 command_runner.go:130] > # will be added.
	I1127 23:44:15.267229   97564 command_runner.go:130] > # default_capabilities = [
	I1127 23:44:15.267235   97564 command_runner.go:130] > # 	"CHOWN",
	I1127 23:44:15.267241   97564 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1127 23:44:15.267245   97564 command_runner.go:130] > # 	"FSETID",
	I1127 23:44:15.267251   97564 command_runner.go:130] > # 	"FOWNER",
	I1127 23:44:15.267255   97564 command_runner.go:130] > # 	"SETGID",
	I1127 23:44:15.267261   97564 command_runner.go:130] > # 	"SETUID",
	I1127 23:44:15.267264   97564 command_runner.go:130] > # 	"SETPCAP",
	I1127 23:44:15.267268   97564 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1127 23:44:15.267274   97564 command_runner.go:130] > # 	"KILL",
	I1127 23:44:15.267277   97564 command_runner.go:130] > # ]
	I1127 23:44:15.267287   97564 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1127 23:44:15.267299   97564 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1127 23:44:15.267306   97564 command_runner.go:130] > # add_inheritable_capabilities = true
	I1127 23:44:15.267312   97564 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1127 23:44:15.267320   97564 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1127 23:44:15.267324   97564 command_runner.go:130] > # default_sysctls = [
	I1127 23:44:15.267330   97564 command_runner.go:130] > # ]
	I1127 23:44:15.267337   97564 command_runner.go:130] > # List of devices on the host that a
	I1127 23:44:15.267348   97564 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1127 23:44:15.267355   97564 command_runner.go:130] > # allowed_devices = [
	I1127 23:44:15.267359   97564 command_runner.go:130] > # 	"/dev/fuse",
	I1127 23:44:15.267365   97564 command_runner.go:130] > # ]
	I1127 23:44:15.267371   97564 command_runner.go:130] > # List of additional devices, specified as
	I1127 23:44:15.267405   97564 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1127 23:44:15.267413   97564 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1127 23:44:15.267418   97564 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1127 23:44:15.267425   97564 command_runner.go:130] > # additional_devices = [
	I1127 23:44:15.267428   97564 command_runner.go:130] > # ]
	I1127 23:44:15.267435   97564 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1127 23:44:15.267439   97564 command_runner.go:130] > # cdi_spec_dirs = [
	I1127 23:44:15.267444   97564 command_runner.go:130] > # 	"/etc/cdi",
	I1127 23:44:15.267448   97564 command_runner.go:130] > # 	"/var/run/cdi",
	I1127 23:44:15.267453   97564 command_runner.go:130] > # ]
	I1127 23:44:15.267460   97564 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1127 23:44:15.267468   97564 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1127 23:44:15.267474   97564 command_runner.go:130] > # Defaults to false.
	I1127 23:44:15.267481   97564 command_runner.go:130] > # device_ownership_from_security_context = false
	I1127 23:44:15.267489   97564 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1127 23:44:15.267497   97564 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1127 23:44:15.267501   97564 command_runner.go:130] > # hooks_dir = [
	I1127 23:44:15.267508   97564 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1127 23:44:15.267511   97564 command_runner.go:130] > # ]
	I1127 23:44:15.267519   97564 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1127 23:44:15.267528   97564 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1127 23:44:15.267536   97564 command_runner.go:130] > # its default mounts from the following two files:
	I1127 23:44:15.267545   97564 command_runner.go:130] > #
	I1127 23:44:15.267553   97564 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1127 23:44:15.267561   97564 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1127 23:44:15.267569   97564 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1127 23:44:15.267575   97564 command_runner.go:130] > #
	I1127 23:44:15.267581   97564 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1127 23:44:15.267589   97564 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1127 23:44:15.267597   97564 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1127 23:44:15.267604   97564 command_runner.go:130] > #      only add mounts it finds in this file.
	I1127 23:44:15.267610   97564 command_runner.go:130] > #
	I1127 23:44:15.267617   97564 command_runner.go:130] > # default_mounts_file = ""
	I1127 23:44:15.267622   97564 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1127 23:44:15.267631   97564 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1127 23:44:15.267637   97564 command_runner.go:130] > # pids_limit = 0
	I1127 23:44:15.267643   97564 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1127 23:44:15.267651   97564 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1127 23:44:15.267659   97564 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1127 23:44:15.267669   97564 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1127 23:44:15.267675   97564 command_runner.go:130] > # log_size_max = -1
	I1127 23:44:15.267682   97564 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1127 23:44:15.267689   97564 command_runner.go:130] > # log_to_journald = false
	I1127 23:44:15.267695   97564 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1127 23:44:15.267702   97564 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1127 23:44:15.267707   97564 command_runner.go:130] > # Path to directory for container attach sockets.
	I1127 23:44:15.267714   97564 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1127 23:44:15.267719   97564 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1127 23:44:15.267726   97564 command_runner.go:130] > # bind_mount_prefix = ""
	I1127 23:44:15.267733   97564 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1127 23:44:15.267739   97564 command_runner.go:130] > # read_only = false
	I1127 23:44:15.267745   97564 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1127 23:44:15.267753   97564 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1127 23:44:15.267760   97564 command_runner.go:130] > # live configuration reload.
	I1127 23:44:15.267765   97564 command_runner.go:130] > # log_level = "info"
	I1127 23:44:15.267772   97564 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1127 23:44:15.267780   97564 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:44:15.267783   97564 command_runner.go:130] > # log_filter = ""
	I1127 23:44:15.267790   97564 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1127 23:44:15.267798   97564 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1127 23:44:15.267803   97564 command_runner.go:130] > # separated by comma.
	I1127 23:44:15.267807   97564 command_runner.go:130] > # uid_mappings = ""
	I1127 23:44:15.267815   97564 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1127 23:44:15.267823   97564 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1127 23:44:15.267829   97564 command_runner.go:130] > # separated by comma.
	I1127 23:44:15.267834   97564 command_runner.go:130] > # gid_mappings = ""
	I1127 23:44:15.267842   97564 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1127 23:44:15.267850   97564 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1127 23:44:15.267859   97564 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1127 23:44:15.267865   97564 command_runner.go:130] > # minimum_mappable_uid = -1
	I1127 23:44:15.267871   97564 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1127 23:44:15.267880   97564 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1127 23:44:15.267888   97564 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1127 23:44:15.267894   97564 command_runner.go:130] > # minimum_mappable_gid = -1
	I1127 23:44:15.267900   97564 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1127 23:44:15.267908   97564 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1127 23:44:15.267916   97564 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1127 23:44:15.267922   97564 command_runner.go:130] > # ctr_stop_timeout = 30
	I1127 23:44:15.267928   97564 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1127 23:44:15.267937   97564 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1127 23:44:15.267945   97564 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1127 23:44:15.267949   97564 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1127 23:44:15.267955   97564 command_runner.go:130] > # drop_infra_ctr = true
	I1127 23:44:15.267962   97564 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1127 23:44:15.267969   97564 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1127 23:44:15.267982   97564 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1127 23:44:15.267989   97564 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1127 23:44:15.267994   97564 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1127 23:44:15.268001   97564 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1127 23:44:15.268006   97564 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1127 23:44:15.268015   97564 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1127 23:44:15.268021   97564 command_runner.go:130] > # pinns_path = ""
	I1127 23:44:15.268027   97564 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1127 23:44:15.268035   97564 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1127 23:44:15.268043   97564 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1127 23:44:15.268050   97564 command_runner.go:130] > # default_runtime = "runc"
	I1127 23:44:15.268055   97564 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1127 23:44:15.268074   97564 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1127 23:44:15.268085   97564 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1127 23:44:15.268090   97564 command_runner.go:130] > # creation as a file is not desired either.
	I1127 23:44:15.268100   97564 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1127 23:44:15.268107   97564 command_runner.go:130] > # the hostname is being managed dynamically.
	I1127 23:44:15.268112   97564 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1127 23:44:15.268119   97564 command_runner.go:130] > # ]
	I1127 23:44:15.268128   97564 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1127 23:44:15.268136   97564 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1127 23:44:15.268145   97564 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1127 23:44:15.268153   97564 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1127 23:44:15.268159   97564 command_runner.go:130] > #
	I1127 23:44:15.268163   97564 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1127 23:44:15.268170   97564 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1127 23:44:15.268174   97564 command_runner.go:130] > #  runtime_type = "oci"
	I1127 23:44:15.268182   97564 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1127 23:44:15.268189   97564 command_runner.go:130] > #  privileged_without_host_devices = false
	I1127 23:44:15.268193   97564 command_runner.go:130] > #  allowed_annotations = []
	I1127 23:44:15.268199   97564 command_runner.go:130] > # Where:
	I1127 23:44:15.268205   97564 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1127 23:44:15.268213   97564 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1127 23:44:15.268221   97564 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1127 23:44:15.268230   97564 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1127 23:44:15.268236   97564 command_runner.go:130] > #   in $PATH.
	I1127 23:44:15.268244   97564 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1127 23:44:15.268253   97564 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1127 23:44:15.268264   97564 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1127 23:44:15.268270   97564 command_runner.go:130] > #   state.
	I1127 23:44:15.268277   97564 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1127 23:44:15.268284   97564 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1127 23:44:15.268291   97564 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1127 23:44:15.268299   97564 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1127 23:44:15.268307   97564 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1127 23:44:15.268314   97564 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1127 23:44:15.268321   97564 command_runner.go:130] > #   The currently recognized values are:
	I1127 23:44:15.268327   97564 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1127 23:44:15.268336   97564 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1127 23:44:15.268343   97564 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1127 23:44:15.268351   97564 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1127 23:44:15.268360   97564 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1127 23:44:15.268368   97564 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1127 23:44:15.268376   97564 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1127 23:44:15.268387   97564 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1127 23:44:15.268395   97564 command_runner.go:130] > #   should be moved to the container's cgroup
	I1127 23:44:15.268402   97564 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1127 23:44:15.268408   97564 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1127 23:44:15.268414   97564 command_runner.go:130] > runtime_type = "oci"
	I1127 23:44:15.268418   97564 command_runner.go:130] > runtime_root = "/run/runc"
	I1127 23:44:15.268424   97564 command_runner.go:130] > runtime_config_path = ""
	I1127 23:44:15.268429   97564 command_runner.go:130] > monitor_path = ""
	I1127 23:44:15.268435   97564 command_runner.go:130] > monitor_cgroup = ""
	I1127 23:44:15.268439   97564 command_runner.go:130] > monitor_exec_cgroup = ""
	I1127 23:44:15.268491   97564 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1127 23:44:15.268499   97564 command_runner.go:130] > # running containers
	I1127 23:44:15.268503   97564 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1127 23:44:15.268509   97564 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1127 23:44:15.268515   97564 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1127 23:44:15.268523   97564 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1127 23:44:15.268529   97564 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1127 23:44:15.268536   97564 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1127 23:44:15.268547   97564 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1127 23:44:15.268554   97564 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1127 23:44:15.268559   97564 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1127 23:44:15.268565   97564 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
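	The commented-out tables above are placeholders. Following the field descriptions earlier in this config, registering an extra handler needs only a small TOML table; a minimal sketch, assuming crun is installed at /usr/bin/crun and that CRI-O reads drop-ins from /etc/crio/crio.conf.d/ (both paths are assumptions, not taken from this run):
	# Hypothetical drop-in registering crun as an additional runtime handler.
	sudo tee /etc/crio/crio.conf.d/10-crun.conf <<-'EOF'
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	EOF
	sudo systemctl restart crio
	A Kubernetes RuntimeClass with handler "crun" could then select this runtime for individual pods.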
	I1127 23:44:15.268572   97564 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1127 23:44:15.268579   97564 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1127 23:44:15.268588   97564 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1127 23:44:15.268597   97564 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1127 23:44:15.268605   97564 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1127 23:44:15.268612   97564 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1127 23:44:15.268624   97564 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1127 23:44:15.268634   97564 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1127 23:44:15.268641   97564 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1127 23:44:15.268649   97564 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1127 23:44:15.268655   97564 command_runner.go:130] > # Example:
	I1127 23:44:15.268660   97564 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1127 23:44:15.268667   97564 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1127 23:44:15.268672   97564 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1127 23:44:15.268681   97564 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1127 23:44:15.268685   97564 command_runner.go:130] > # cpuset = "0-1"
	I1127 23:44:15.268692   97564 command_runner.go:130] > # cpushares = 0
	I1127 23:44:15.268696   97564 command_runner.go:130] > # Where:
	I1127 23:44:15.268703   97564 command_runner.go:130] > # The workload name is workload-type.
	I1127 23:44:15.268709   97564 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1127 23:44:15.268717   97564 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1127 23:44:15.268724   97564 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1127 23:44:15.268732   97564 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1127 23:44:15.268742   97564 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1127 23:44:15.268748   97564 command_runner.go:130] > # 
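	To make the opt-in mechanics above concrete, here is a sketch of a pod that activates the example workload-type workload and overrides cpushares for one container; the pod name, container name, and share value are hypothetical:
	# Hypothetical pod opting into the "workload-type" workload from the example above.
	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""                              # activation_annotation; value ignored
	    io.crio.workload-type/app: '{"cpushares": "512"}' # per-container override
	spec:
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.9
	EOF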
	I1127 23:44:15.268754   97564 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1127 23:44:15.268760   97564 command_runner.go:130] > #
	I1127 23:44:15.268765   97564 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1127 23:44:15.268773   97564 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1127 23:44:15.268779   97564 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1127 23:44:15.268788   97564 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1127 23:44:15.268795   97564 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1127 23:44:15.268804   97564 command_runner.go:130] > [crio.image]
	I1127 23:44:15.268812   97564 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1127 23:44:15.268819   97564 command_runner.go:130] > # default_transport = "docker://"
	I1127 23:44:15.268825   97564 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1127 23:44:15.268833   97564 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1127 23:44:15.268840   97564 command_runner.go:130] > # global_auth_file = ""
	I1127 23:44:15.268845   97564 command_runner.go:130] > # The image used to instantiate infra containers.
	I1127 23:44:15.268852   97564 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:44:15.268857   97564 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1127 23:44:15.268865   97564 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1127 23:44:15.268873   97564 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1127 23:44:15.268880   97564 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:44:15.268884   97564 command_runner.go:130] > # pause_image_auth_file = ""
	I1127 23:44:15.268892   97564 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1127 23:44:15.268899   97564 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1127 23:44:15.268907   97564 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1127 23:44:15.268915   97564 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1127 23:44:15.268921   97564 command_runner.go:130] > # pause_command = "/pause"
	I1127 23:44:15.268929   97564 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1127 23:44:15.268936   97564 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1127 23:44:15.268947   97564 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1127 23:44:15.268955   97564 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1127 23:44:15.268962   97564 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1127 23:44:15.268970   97564 command_runner.go:130] > # signature_policy = ""
	I1127 23:44:15.268979   97564 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1127 23:44:15.268987   97564 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1127 23:44:15.268994   97564 command_runner.go:130] > # changing them here.
	I1127 23:44:15.269005   97564 command_runner.go:130] > # insecure_registries = [
	I1127 23:44:15.269010   97564 command_runner.go:130] > # ]
	I1127 23:44:15.269016   97564 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1127 23:44:15.269023   97564 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1127 23:44:15.269027   97564 command_runner.go:130] > # image_volumes = "mkdir"
	I1127 23:44:15.269034   97564 command_runner.go:130] > # Temporary directory to use for storing big files
	I1127 23:44:15.269039   97564 command_runner.go:130] > # big_files_temporary_dir = ""
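	As the comments above note, registry policy belongs in /etc/containers/registries.conf; overriding only CRI-O's own image settings would look like this sketch (the drop-in path and the local registry name are assumptions):
	# Hypothetical drop-in overriding image settings only for CRI-O.
	sudo tee /etc/crio/crio.conf.d/20-image.conf <<-'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	insecure_registries = ["registry.local:5000"]
	EOF
	sudo systemctl restart crio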
	I1127 23:44:15.269047   97564 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1127 23:44:15.269053   97564 command_runner.go:130] > # CNI plugins.
	I1127 23:44:15.269060   97564 command_runner.go:130] > [crio.network]
	I1127 23:44:15.269068   97564 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1127 23:44:15.269075   97564 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1127 23:44:15.269082   97564 command_runner.go:130] > # cni_default_network = ""
	I1127 23:44:15.269088   97564 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1127 23:44:15.269094   97564 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1127 23:44:15.269100   97564 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1127 23:44:15.269106   97564 command_runner.go:130] > # plugin_dirs = [
	I1127 23:44:15.269110   97564 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1127 23:44:15.269115   97564 command_runner.go:130] > # ]
	I1127 23:44:15.269121   97564 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1127 23:44:15.269127   97564 command_runner.go:130] > [crio.metrics]
	I1127 23:44:15.269132   97564 command_runner.go:130] > # Globally enable or disable metrics support.
	I1127 23:44:15.269139   97564 command_runner.go:130] > # enable_metrics = false
	I1127 23:44:15.269144   97564 command_runner.go:130] > # Specify enabled metrics collectors.
	I1127 23:44:15.269151   97564 command_runner.go:130] > # By default, all metrics are enabled.
	I1127 23:44:15.269157   97564 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1127 23:44:15.269166   97564 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1127 23:44:15.269176   97564 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1127 23:44:15.269183   97564 command_runner.go:130] > # metrics_collectors = [
	I1127 23:44:15.269187   97564 command_runner.go:130] > # 	"operations",
	I1127 23:44:15.269194   97564 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1127 23:44:15.269201   97564 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1127 23:44:15.269205   97564 command_runner.go:130] > # 	"operations_errors",
	I1127 23:44:15.269211   97564 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1127 23:44:15.269216   97564 command_runner.go:130] > # 	"image_pulls_by_name",
	I1127 23:44:15.269222   97564 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1127 23:44:15.269226   97564 command_runner.go:130] > # 	"image_pulls_failures",
	I1127 23:44:15.269233   97564 command_runner.go:130] > # 	"image_pulls_successes",
	I1127 23:44:15.269237   97564 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1127 23:44:15.269243   97564 command_runner.go:130] > # 	"image_layer_reuse",
	I1127 23:44:15.269247   97564 command_runner.go:130] > # 	"containers_oom_total",
	I1127 23:44:15.269253   97564 command_runner.go:130] > # 	"containers_oom",
	I1127 23:44:15.269257   97564 command_runner.go:130] > # 	"processes_defunct",
	I1127 23:44:15.269264   97564 command_runner.go:130] > # 	"operations_total",
	I1127 23:44:15.269268   97564 command_runner.go:130] > # 	"operations_latency_seconds",
	I1127 23:44:15.269279   97564 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1127 23:44:15.269286   97564 command_runner.go:130] > # 	"operations_errors_total",
	I1127 23:44:15.269290   97564 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1127 23:44:15.269297   97564 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1127 23:44:15.269301   97564 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1127 23:44:15.269308   97564 command_runner.go:130] > # 	"image_pulls_success_total",
	I1127 23:44:15.269312   97564 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1127 23:44:15.269319   97564 command_runner.go:130] > # 	"containers_oom_count_total",
	I1127 23:44:15.269322   97564 command_runner.go:130] > # ]
	I1127 23:44:15.269328   97564 command_runner.go:130] > # The port on which the metrics server will listen.
	I1127 23:44:15.269334   97564 command_runner.go:130] > # metrics_port = 9090
	I1127 23:44:15.269339   97564 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1127 23:44:15.269346   97564 command_runner.go:130] > # metrics_socket = ""
	I1127 23:44:15.269351   97564 command_runner.go:130] > # The certificate for the secure metrics server.
	I1127 23:44:15.269357   97564 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1127 23:44:15.269365   97564 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1127 23:44:15.269372   97564 command_runner.go:130] > # certificate on any modification event.
	I1127 23:44:15.269376   97564 command_runner.go:130] > # metrics_cert = ""
	I1127 23:44:15.269385   97564 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1127 23:44:15.269393   97564 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1127 23:44:15.269399   97564 command_runner.go:130] > # metrics_key = ""
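	With metrics turned on, the collectors listed above are served on metrics_port (9090 by default). A quick scrape from the node, assuming enable_metrics = true was set via a drop-in and CRI-O restarted:
	# Assumes enable_metrics = true; plain HTTP unless metrics_cert/metrics_key are configured.
	curl -s http://127.0.0.1:9090/metrics | grep crio_ | head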
	I1127 23:44:15.269405   97564 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1127 23:44:15.269411   97564 command_runner.go:130] > [crio.tracing]
	I1127 23:44:15.269416   97564 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1127 23:44:15.269422   97564 command_runner.go:130] > # enable_tracing = false
	I1127 23:44:15.269428   97564 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1127 23:44:15.269435   97564 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1127 23:44:15.269440   97564 command_runner.go:130] > # Number of samples to collect per million spans.
	I1127 23:44:15.269447   97564 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1127 23:44:15.269453   97564 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1127 23:44:15.269458   97564 command_runner.go:130] > [crio.stats]
	I1127 23:44:15.269464   97564 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1127 23:44:15.269471   97564 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1127 23:44:15.269478   97564 command_runner.go:130] > # stats_collection_period = 0
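	The dump above is CRI-O's on-disk configuration as read back over SSH. On the node itself the effective settings can be cross-checked directly; both commands below are standard CRI-O/cri-tools entry points:
	sudo crio config | head -40   # regenerate the config CRI-O would use from current defaults/flags
	sudo crictl info              # runtime status and config as JSON, via the CRI socket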
	I1127 23:44:15.269551   97564 cni.go:84] Creating CNI manager for ""
	I1127 23:44:15.269561   97564 cni.go:136] 1 nodes found, recommending kindnet
	I1127 23:44:15.269579   97564 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1127 23:44:15.269598   97564 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-595051 NodeName:multinode-595051 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1127 23:44:15.269715   97564 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-595051"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
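	This generated config is written to /var/tmp/minikube/kubeadm.yaml (see the scp line below). Newer kubeadm releases can lint such a file before it is used; a sketch, assuming the "kubeadm config validate" subcommand shipped with v1.28 is available:
	sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml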
	
	I1127 23:44:15.269779   97564 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-595051 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-595051 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
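	The [Service] override above lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines below). Inspecting and applying it by hand would look like:
	systemctl cat kubelet                                            # unit plus all drop-ins, including 10-kubeadm.conf
	sudo systemctl daemon-reload && sudo systemctl restart kubelet   # reload units so the override takes effect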
	I1127 23:44:15.269827   97564 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1127 23:44:15.277352   97564 command_runner.go:130] > kubeadm
	I1127 23:44:15.277372   97564 command_runner.go:130] > kubectl
	I1127 23:44:15.277376   97564 command_runner.go:130] > kubelet
	I1127 23:44:15.277999   97564 binaries.go:44] Found k8s binaries, skipping transfer
	I1127 23:44:15.278091   97564 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1127 23:44:15.285546   97564 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1127 23:44:15.300590   97564 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1127 23:44:15.315701   97564 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1127 23:44:15.331091   97564 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1127 23:44:15.334266   97564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 23:44:15.343987   97564 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051 for IP: 192.168.58.2
	I1127 23:44:15.344023   97564 certs.go:190] acquiring lock for shared ca certs: {Name:mkd1a5db8f506dfbef3cb84c722632fd59c37603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:44:15.344164   97564 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4554/.minikube/ca.key
	I1127 23:44:15.344220   97564 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4554/.minikube/proxy-client-ca.key
	I1127 23:44:15.344278   97564 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/client.key
	I1127 23:44:15.344300   97564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/client.crt with IP's: []
	I1127 23:44:15.459653   97564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/client.crt ...
	I1127 23:44:15.459685   97564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/client.crt: {Name:mk0cccbb2dd9e07e3da52b3ce18d151bb69524a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:44:15.459877   97564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/client.key ...
	I1127 23:44:15.459893   97564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/client.key: {Name:mkbf151d392c1ee718aabb624620c0dfafdef532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:44:15.459989   97564 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/apiserver.key.cee25041
	I1127 23:44:15.460007   97564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1127 23:44:15.520369   97564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/apiserver.crt.cee25041 ...
	I1127 23:44:15.520406   97564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/apiserver.crt.cee25041: {Name:mk6d0ca544e3342c933a9483624bd2bced1400c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:44:15.520582   97564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/apiserver.key.cee25041 ...
	I1127 23:44:15.520599   97564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/apiserver.key.cee25041: {Name:mk4c60923440e5037bf73d70f52b6e62534c3637 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:44:15.520697   97564 certs.go:337] copying /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/apiserver.crt
	I1127 23:44:15.520785   97564 certs.go:341] copying /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/apiserver.key
	I1127 23:44:15.520862   97564 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/proxy-client.key
	I1127 23:44:15.520884   97564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/proxy-client.crt with IP's: []
	I1127 23:44:15.662123   97564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/proxy-client.crt ...
	I1127 23:44:15.662162   97564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/proxy-client.crt: {Name:mk36b6e83e447e5f10acad60333a9abd063d0e56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:44:15.662355   97564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/proxy-client.key ...
	I1127 23:44:15.662389   97564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/proxy-client.key: {Name:mkac374c4d4045dde03b379f427afeb0f86075dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
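	The client, apiserver, and proxy-client certificates above are generated in-process by minikube's crypto.go. Purely as an illustrative sketch of the equivalent flow (the file names and the subject are assumptions, not minikube's actual values), the same kind of CA-signed client cert could be produced with openssl:
	# Sketch: key, CSR, then sign with the cluster CA (ca.crt/ca.key assumed present).
	openssl genrsa -out client.key 2048
	openssl req -new -key client.key -subj "/CN=minikube-user/O=system:masters" -out client.csr
	openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out client.crt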
	I1127 23:44:15.662485   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1127 23:44:15.662508   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1127 23:44:15.662523   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1127 23:44:15.662540   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1127 23:44:15.662567   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1127 23:44:15.662586   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1127 23:44:15.662602   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1127 23:44:15.662660   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1127 23:44:15.662740   97564 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/home/jenkins/minikube-integration/17206-4554/.minikube/certs/11306.pem (1338 bytes)
	W1127 23:44:15.662792   97564 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4554/.minikube/certs/home/jenkins/minikube-integration/17206-4554/.minikube/certs/11306_empty.pem, impossibly tiny 0 bytes
	I1127 23:44:15.662807   97564 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca-key.pem (1675 bytes)
	I1127 23:44:15.662847   97564 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem (1078 bytes)
	I1127 23:44:15.662881   97564 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/home/jenkins/minikube-integration/17206-4554/.minikube/certs/cert.pem (1123 bytes)
	I1127 23:44:15.662917   97564 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/home/jenkins/minikube-integration/17206-4554/.minikube/certs/key.pem (1679 bytes)
	I1127 23:44:15.662987   97564 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4554/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4554/.minikube/files/etc/ssl/certs/113062.pem (1708 bytes)
	I1127 23:44:15.663029   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/11306.pem -> /usr/share/ca-certificates/11306.pem
	I1127 23:44:15.663050   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/files/etc/ssl/certs/113062.pem -> /usr/share/ca-certificates/113062.pem
	I1127 23:44:15.663071   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:44:15.663701   97564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1127 23:44:15.685369   97564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1127 23:44:15.705986   97564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1127 23:44:15.726236   97564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1127 23:44:15.746848   97564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1127 23:44:15.767300   97564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1127 23:44:15.788003   97564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1127 23:44:15.808744   97564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1127 23:44:15.829685   97564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/certs/11306.pem --> /usr/share/ca-certificates/11306.pem (1338 bytes)
	I1127 23:44:15.850615   97564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/files/etc/ssl/certs/113062.pem --> /usr/share/ca-certificates/113062.pem (1708 bytes)
	I1127 23:44:15.871761   97564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1127 23:44:15.893234   97564 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1127 23:44:15.909136   97564 ssh_runner.go:195] Run: openssl version
	I1127 23:44:15.914141   97564 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1127 23:44:15.914240   97564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11306.pem && ln -fs /usr/share/ca-certificates/11306.pem /etc/ssl/certs/11306.pem"
	I1127 23:44:15.923498   97564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11306.pem
	I1127 23:44:15.926970   97564 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 27 23:31 /usr/share/ca-certificates/11306.pem
	I1127 23:44:15.927005   97564 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:31 /usr/share/ca-certificates/11306.pem
	I1127 23:44:15.927049   97564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11306.pem
	I1127 23:44:15.933639   97564 command_runner.go:130] > 51391683
	I1127 23:44:15.933766   97564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11306.pem /etc/ssl/certs/51391683.0"
	I1127 23:44:15.942354   97564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/113062.pem && ln -fs /usr/share/ca-certificates/113062.pem /etc/ssl/certs/113062.pem"
	I1127 23:44:15.950671   97564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113062.pem
	I1127 23:44:15.953650   97564 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 27 23:31 /usr/share/ca-certificates/113062.pem
	I1127 23:44:15.953676   97564 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:31 /usr/share/ca-certificates/113062.pem
	I1127 23:44:15.953721   97564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113062.pem
	I1127 23:44:15.959631   97564 command_runner.go:130] > 3ec20f2e
	I1127 23:44:15.959820   97564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/113062.pem /etc/ssl/certs/3ec20f2e.0"
	I1127 23:44:15.967970   97564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1127 23:44:15.976042   97564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:44:15.979216   97564 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 27 23:25 /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:44:15.979246   97564 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:25 /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:44:15.979281   97564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:44:15.985234   97564 command_runner.go:130] > b5213941
	I1127 23:44:15.985389   97564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
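	The hash-and-symlink sequence above follows OpenSSL's c_rehash convention: each trusted certificate gets a symlink named after its subject hash (with a .0 suffix) under /etc/ssl/certs. Reproducing it for one file is a two-liner; the path is taken from the log, everything else is standard openssl:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"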
	I1127 23:44:15.993682   97564 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1127 23:44:15.996556   97564 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 23:44:15.996602   97564 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 23:44:15.996637   97564 kubeadm.go:404] StartCluster: {Name:multinode-595051 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-595051 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:44:15.996724   97564 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1127 23:44:15.996757   97564 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1127 23:44:16.029017   97564 cri.go:89] found id: ""
	I1127 23:44:16.029104   97564 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1127 23:44:16.037538   97564 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1127 23:44:16.037569   97564 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1127 23:44:16.037580   97564 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1127 23:44:16.037641   97564 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1127 23:44:16.045508   97564 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1127 23:44:16.045570   97564 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1127 23:44:16.053068   97564 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1127 23:44:16.053092   97564 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1127 23:44:16.053099   97564 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1127 23:44:16.053107   97564 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1127 23:44:16.053135   97564 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1127 23:44:16.053172   97564 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1127 23:44:16.131850   97564 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1046-gcp\n", err: exit status 1
	I1127 23:44:16.131898   97564 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1046-gcp\n", err: exit status 1
	I1127 23:44:16.195248   97564 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1127 23:44:16.195277   97564 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
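	Both preflight warnings above are non-fatal; SystemVerification is in the --ignore-preflight-errors list passed earlier precisely because the docker driver's kernel lacks the configs module. The preflight phase can be re-run in isolation to inspect such warnings, e.g.:
	# Re-run only kubeadm's preflight checks against the same generated config.
	sudo /var/lib/minikube/binaries/v1.28.4/kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=SystemVerification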
	I1127 23:44:24.756024   97564 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1127 23:44:24.756061   97564 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1127 23:44:24.756111   97564 kubeadm.go:322] [preflight] Running pre-flight checks
	I1127 23:44:24.756122   97564 command_runner.go:130] > [preflight] Running pre-flight checks
	I1127 23:44:24.756222   97564 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1127 23:44:24.756237   97564 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1127 23:44:24.756304   97564 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1046-gcp
	I1127 23:44:24.756315   97564 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1046-gcp
	I1127 23:44:24.756360   97564 kubeadm.go:322] OS: Linux
	I1127 23:44:24.756371   97564 command_runner.go:130] > OS: Linux
	I1127 23:44:24.756430   97564 kubeadm.go:322] CGROUPS_CPU: enabled
	I1127 23:44:24.756453   97564 command_runner.go:130] > CGROUPS_CPU: enabled
	I1127 23:44:24.756516   97564 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1127 23:44:24.756529   97564 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1127 23:44:24.756588   97564 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1127 23:44:24.756599   97564 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1127 23:44:24.756658   97564 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1127 23:44:24.756671   97564 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1127 23:44:24.756730   97564 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1127 23:44:24.756739   97564 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1127 23:44:24.756795   97564 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1127 23:44:24.756809   97564 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1127 23:44:24.756867   97564 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1127 23:44:24.756874   97564 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1127 23:44:24.756936   97564 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1127 23:44:24.756944   97564 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1127 23:44:24.757001   97564 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1127 23:44:24.757008   97564 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1127 23:44:24.757096   97564 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1127 23:44:24.757117   97564 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1127 23:44:24.757220   97564 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1127 23:44:24.757229   97564 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1127 23:44:24.757330   97564 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1127 23:44:24.757339   97564 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1127 23:44:24.757409   97564 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1127 23:44:24.759181   97564 out.go:204]   - Generating certificates and keys ...
	I1127 23:44:24.757549   97564 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1127 23:44:24.759347   97564 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1127 23:44:24.759385   97564 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1127 23:44:24.759496   97564 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1127 23:44:24.759514   97564 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1127 23:44:24.759633   97564 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1127 23:44:24.759659   97564 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1127 23:44:24.759731   97564 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1127 23:44:24.759742   97564 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1127 23:44:24.759824   97564 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1127 23:44:24.759832   97564 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1127 23:44:24.759901   97564 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1127 23:44:24.759909   97564 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1127 23:44:24.759974   97564 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1127 23:44:24.759981   97564 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1127 23:44:24.760132   97564 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-595051] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1127 23:44:24.760140   97564 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-595051] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1127 23:44:24.760195   97564 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1127 23:44:24.760202   97564 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1127 23:44:24.760354   97564 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-595051] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1127 23:44:24.760362   97564 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-595051] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1127 23:44:24.760441   97564 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1127 23:44:24.760449   97564 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1127 23:44:24.760519   97564 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1127 23:44:24.760540   97564 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1127 23:44:24.760614   97564 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1127 23:44:24.760626   97564 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1127 23:44:24.760698   97564 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1127 23:44:24.760721   97564 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1127 23:44:24.760818   97564 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1127 23:44:24.760870   97564 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1127 23:44:24.760982   97564 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1127 23:44:24.761035   97564 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1127 23:44:24.761184   97564 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1127 23:44:24.761208   97564 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1127 23:44:24.761297   97564 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1127 23:44:24.761334   97564 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1127 23:44:24.761476   97564 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1127 23:44:24.761499   97564 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1127 23:44:24.761620   97564 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1127 23:44:24.763910   97564 out.go:204]   - Booting up control plane ...
	I1127 23:44:24.761683   97564 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1127 23:44:24.764045   97564 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1127 23:44:24.764071   97564 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1127 23:44:24.764184   97564 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1127 23:44:24.764200   97564 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1127 23:44:24.764286   97564 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1127 23:44:24.764298   97564 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1127 23:44:24.764429   97564 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1127 23:44:24.764455   97564 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1127 23:44:24.764560   97564 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1127 23:44:24.764573   97564 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1127 23:44:24.764622   97564 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1127 23:44:24.764632   97564 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1127 23:44:24.764819   97564 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1127 23:44:24.764833   97564 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1127 23:44:24.764922   97564 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002194 seconds
	I1127 23:44:24.764933   97564 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.002194 seconds
	I1127 23:44:24.765078   97564 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1127 23:44:24.765088   97564 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1127 23:44:24.765250   97564 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1127 23:44:24.765270   97564 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1127 23:44:24.765345   97564 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1127 23:44:24.765353   97564 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1127 23:44:24.765590   97564 kubeadm.go:322] [mark-control-plane] Marking the node multinode-595051 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1127 23:44:24.765603   97564 command_runner.go:130] > [mark-control-plane] Marking the node multinode-595051 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1127 23:44:24.765674   97564 kubeadm.go:322] [bootstrap-token] Using token: k7uyef.v1sogi7cmymq0c0v
	I1127 23:44:24.767213   97564 out.go:204]   - Configuring RBAC rules ...
	I1127 23:44:24.765792   97564 command_runner.go:130] > [bootstrap-token] Using token: k7uyef.v1sogi7cmymq0c0v
	I1127 23:44:24.767377   97564 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1127 23:44:24.767404   97564 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1127 23:44:24.767521   97564 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1127 23:44:24.767534   97564 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1127 23:44:24.767735   97564 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1127 23:44:24.767752   97564 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1127 23:44:24.767920   97564 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1127 23:44:24.767927   97564 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1127 23:44:24.768029   97564 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1127 23:44:24.768059   97564 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1127 23:44:24.768212   97564 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1127 23:44:24.768220   97564 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1127 23:44:24.768375   97564 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1127 23:44:24.768395   97564 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1127 23:44:24.768480   97564 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1127 23:44:24.768500   97564 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1127 23:44:24.768569   97564 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1127 23:44:24.768580   97564 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1127 23:44:24.768591   97564 kubeadm.go:322] 
	I1127 23:44:24.768679   97564 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1127 23:44:24.768692   97564 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1127 23:44:24.768701   97564 kubeadm.go:322] 
	I1127 23:44:24.768803   97564 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1127 23:44:24.768812   97564 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1127 23:44:24.768815   97564 kubeadm.go:322] 
	I1127 23:44:24.768836   97564 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1127 23:44:24.768842   97564 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1127 23:44:24.768927   97564 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1127 23:44:24.768939   97564 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1127 23:44:24.769007   97564 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1127 23:44:24.769017   97564 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1127 23:44:24.769023   97564 kubeadm.go:322] 
	I1127 23:44:24.769096   97564 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1127 23:44:24.769107   97564 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1127 23:44:24.769114   97564 kubeadm.go:322] 
	I1127 23:44:24.769206   97564 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1127 23:44:24.769220   97564 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1127 23:44:24.769226   97564 kubeadm.go:322] 
	I1127 23:44:24.769292   97564 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1127 23:44:24.769301   97564 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1127 23:44:24.769416   97564 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1127 23:44:24.769459   97564 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1127 23:44:24.769546   97564 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1127 23:44:24.769558   97564 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1127 23:44:24.769563   97564 kubeadm.go:322] 
	I1127 23:44:24.769655   97564 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1127 23:44:24.769666   97564 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1127 23:44:24.769751   97564 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1127 23:44:24.769763   97564 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1127 23:44:24.769768   97564 kubeadm.go:322] 
	I1127 23:44:24.769861   97564 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token k7uyef.v1sogi7cmymq0c0v \
	I1127 23:44:24.769871   97564 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token k7uyef.v1sogi7cmymq0c0v \
	I1127 23:44:24.770005   97564 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:4d50fd6fa1338d5979f67697fdf2bc9944f7b911d13890c8a839ee1a72bd8682 \
	I1127 23:44:24.770015   97564 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4d50fd6fa1338d5979f67697fdf2bc9944f7b911d13890c8a839ee1a72bd8682 \
	I1127 23:44:24.770042   97564 command_runner.go:130] > 	--control-plane 
	I1127 23:44:24.770110   97564 kubeadm.go:322] 	--control-plane 
	I1127 23:44:24.770121   97564 kubeadm.go:322] 
	I1127 23:44:24.770223   97564 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1127 23:44:24.770231   97564 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1127 23:44:24.770240   97564 kubeadm.go:322] 
	I1127 23:44:24.770305   97564 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token k7uyef.v1sogi7cmymq0c0v \
	I1127 23:44:24.770311   97564 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token k7uyef.v1sogi7cmymq0c0v \
	I1127 23:44:24.770390   97564 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:4d50fd6fa1338d5979f67697fdf2bc9944f7b911d13890c8a839ee1a72bd8682 
	I1127 23:44:24.770407   97564 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4d50fd6fa1338d5979f67697fdf2bc9944f7b911d13890c8a839ee1a72bd8682 
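The worker join command printed above is meant to be run verbatim on each additional node; a minimal sketch of doing so, assuming root (or passwordless sudo) on the joining machine and reusing the token and CA hash exactly as kubeadm emitted them:

  sudo kubeadm join control-plane.minikube.internal:8443 \
    --token k7uyef.v1sogi7cmymq0c0v \
    --discovery-token-ca-cert-hash sha256:4d50fd6fa1338d5979f67697fdf2bc9944f7b911d13890c8a839ee1a72bd8682

Tokens like this one expire by default; a fresh one can be minted later with kubeadm token create --print-join-command.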
	I1127 23:44:24.770416   97564 cni.go:84] Creating CNI manager for ""
	I1127 23:44:24.770424   97564 cni.go:136] 1 nodes found, recommending kindnet
	I1127 23:44:24.771960   97564 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1127 23:44:24.773385   97564 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1127 23:44:24.776861   97564 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1127 23:44:24.776879   97564 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I1127 23:44:24.776886   97564 command_runner.go:130] > Device: 37h/55d	Inode: 545259      Links: 1
	I1127 23:44:24.776892   97564 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1127 23:44:24.776899   97564 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I1127 23:44:24.776907   97564 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I1127 23:44:24.776914   97564 command_runner.go:130] > Change: 2023-11-27 23:25:11.484300745 +0000
	I1127 23:44:24.776925   97564 command_runner.go:130] >  Birth: 2023-11-27 23:25:11.460298307 +0000
	I1127 23:44:24.776986   97564 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1127 23:44:24.776997   97564 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1127 23:44:24.846562   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1127 23:44:25.418912   97564 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1127 23:44:25.425096   97564 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1127 23:44:25.431610   97564 command_runner.go:130] > serviceaccount/kindnet created
	I1127 23:44:25.440213   97564 command_runner.go:130] > daemonset.apps/kindnet created
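The four objects above are minikube's kindnet CNI manifest being created. A quick way to confirm the DaemonSet actually rolled out, assuming it landed in kube-system as minikube's bundled manifest does:

  kubectl -n kube-system rollout status daemonset kindnet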
	I1127 23:44:25.444174   97564 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1127 23:44:25.444241   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:44:25.444287   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=multinode-595051 minikube.k8s.io/updated_at=2023_11_27T23_44_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:44:25.451997   97564 command_runner.go:130] > -16
	I1127 23:44:25.452056   97564 ops.go:34] apiserver oom_adj: -16
	I1127 23:44:25.553647   97564 command_runner.go:130] > node/multinode-595051 labeled
	I1127 23:44:25.553719   97564 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1127 23:44:25.553801   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:44:25.617360   97564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:44:25.620248   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:44:25.771311   97564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:44:26.275187   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:44:26.340021   97564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:44:26.775311   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:44:26.839055   97564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:44:27.275351   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:44:27.340317   97564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:44:27.774870   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:44:27.837510   97564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:44:28.275366   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:44:28.338290   97564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:44:28.774771   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:44:28.839401   97564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:44:29.274998   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:44:29.339219   97564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:44:29.774692   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:44:29.836025   97564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:44:30.275161   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:44:30.339747   97564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:44:30.775290   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:44:30.837050   97564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:44:31.275174   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:44:31.337770   97564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:44:31.774867   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:44:31.835893   97564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:44:32.274636   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:44:32.336112   97564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:44:32.775047   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:44:32.836903   97564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:44:33.275272   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:44:33.336434   97564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:44:33.774799   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:44:33.839762   97564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:44:34.275439   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:44:34.340205   97564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:44:34.774797   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:44:34.836341   97564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:44:35.275255   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:44:35.337659   97564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:44:35.775108   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:44:35.836425   97564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:44:36.274589   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:44:36.336896   97564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:44:36.775167   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:44:36.957158   97564 command_runner.go:130] > NAME      SECRETS   AGE
	I1127 23:44:36.957181   97564 command_runner.go:130] > default   0         0s
	I1127 23:44:36.960013   97564 kubeadm.go:1081] duration metric: took 11.515828096s to wait for elevateKubeSystemPrivileges.
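The burst of "serviceaccounts \"default\" not found" errors above is expected: minikube polls kubectl get sa default until the token controller has created the namespace's default ServiceAccount, which is what the 11.5s elevateKubeSystemPrivileges metric measures. A rough shell equivalent of that wait loop (the interval here is arbitrary, not minikube's):

  until kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
    sleep 0.5   # retry until the default ServiceAccount exists
  done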
	I1127 23:44:36.960049   97564 kubeadm.go:406] StartCluster complete in 20.963412885s
	I1127 23:44:36.960071   97564 settings.go:142] acquiring lock: {Name:mk8cf64b397eda9c03dbd178fc3aefd4ce90283a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:44:36.960139   97564 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4554/kubeconfig
	I1127 23:44:36.960805   97564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4554/kubeconfig: {Name:mkeacc22f444b1cc5befda4f2c22a9fc66e858ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:44:36.961036   97564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1127 23:44:36.961051   97564 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1127 23:44:36.961119   97564 addons.go:69] Setting storage-provisioner=true in profile "multinode-595051"
	I1127 23:44:36.961144   97564 addons.go:231] Setting addon storage-provisioner=true in "multinode-595051"
	I1127 23:44:36.961151   97564 addons.go:69] Setting default-storageclass=true in profile "multinode-595051"
	I1127 23:44:36.961171   97564 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-595051"
	I1127 23:44:36.961211   97564 host.go:66] Checking if "multinode-595051" exists ...
	I1127 23:44:36.961231   97564 config.go:182] Loaded profile config "multinode-595051": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:44:36.961417   97564 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17206-4554/kubeconfig
	I1127 23:44:36.961643   97564 cli_runner.go:164] Run: docker container inspect multinode-595051 --format={{.State.Status}}
	I1127 23:44:36.961806   97564 cli_runner.go:164] Run: docker container inspect multinode-595051 --format={{.State.Status}}
	I1127 23:44:36.961803   97564 kapi.go:59] client config for multinode-595051: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:44:36.962592   97564 cert_rotation.go:137] Starting client certificate rotation controller
	I1127 23:44:36.962836   97564 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1127 23:44:36.962851   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:36.962860   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:36.962865   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:36.972426   97564 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1127 23:44:36.972503   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:36.972527   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:36 GMT
	I1127 23:44:36.972547   97564 round_trippers.go:580]     Audit-Id: 82056cd9-24e9-48d8-8be4-5eb54255844f
	I1127 23:44:36.972565   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:36.972582   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:36.972598   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:36.972617   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:36.972638   97564 round_trippers.go:580]     Content-Length: 291
	I1127 23:44:36.972691   97564 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"26a7bde8-57dd-4f08-8c71-2df4ee1c3187","resourceVersion":"234","creationTimestamp":"2023-11-27T23:44:24Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1127 23:44:36.973106   97564 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"26a7bde8-57dd-4f08-8c71-2df4ee1c3187","resourceVersion":"234","creationTimestamp":"2023-11-27T23:44:24Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1127 23:44:36.973195   97564 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1127 23:44:36.973220   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:36.973238   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:36.973254   97564 round_trippers.go:473]     Content-Type: application/json
	I1127 23:44:36.973265   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:36.980342   97564 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1127 23:44:36.980369   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:36.980382   97564 round_trippers.go:580]     Audit-Id: 45843e4e-0622-4658-88d3-a5c9626beba5
	I1127 23:44:36.980391   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:36.980400   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:36.980409   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:36.980418   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:36.980432   97564 round_trippers.go:580]     Content-Length: 291
	I1127 23:44:36.980445   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:36 GMT
	I1127 23:44:36.980479   97564 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"26a7bde8-57dd-4f08-8c71-2df4ee1c3187","resourceVersion":"322","creationTimestamp":"2023-11-27T23:44:24Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
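The GET/PUT pair above rewrites spec.replicas on the coredns Deployment's autoscaling/v1 Scale subresource, taking it from 2 down to 1 for this single-node cluster. The same rescale expressed with plain kubectl, as a sketch rather than what minikube itself runs:

  kubectl -n kube-system scale deployment coredns --replicas=1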
	I1127 23:44:36.980670   97564 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1127 23:44:36.980687   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:36.980696   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:36.980707   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:36.982366   97564 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17206-4554/kubeconfig
	I1127 23:44:36.982651   97564 kapi.go:59] client config for multinode-595051: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:44:36.982997   97564 addons.go:231] Setting addon default-storageclass=true in "multinode-595051"
	I1127 23:44:36.983031   97564 host.go:66] Checking if "multinode-595051" exists ...
	I1127 23:44:36.983192   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:36.983215   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:36.983230   97564 round_trippers.go:580]     Audit-Id: 9d9fa123-7b1a-4b37-8bf2-d40e33bf27d9
	I1127 23:44:36.983245   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:36.983257   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:36.983268   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:36.983279   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:36.983295   97564 round_trippers.go:580]     Content-Length: 291
	I1127 23:44:36.983307   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:36 GMT
	I1127 23:44:36.983339   97564 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"26a7bde8-57dd-4f08-8c71-2df4ee1c3187","resourceVersion":"322","creationTimestamp":"2023-11-27T23:44:24Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1127 23:44:36.983448   97564 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-595051" context rescaled to 1 replicas
	I1127 23:44:36.983488   97564 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1127 23:44:36.985847   97564 out.go:177] * Verifying Kubernetes components...
	I1127 23:44:36.983458   97564 cli_runner.go:164] Run: docker container inspect multinode-595051 --format={{.State.Status}}
	I1127 23:44:36.988950   97564 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:44:36.987497   97564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:44:36.990550   97564 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 23:44:36.990566   97564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1127 23:44:36.990611   97564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-595051
	I1127 23:44:37.007879   97564 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1127 23:44:37.007910   97564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1127 23:44:37.007969   97564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-595051
	I1127 23:44:37.010380   97564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/multinode-595051/id_rsa Username:docker}
	I1127 23:44:37.024116   97564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/multinode-595051/id_rsa Username:docker}
	I1127 23:44:37.070791   97564 command_runner.go:130] > apiVersion: v1
	I1127 23:44:37.070817   97564 command_runner.go:130] > data:
	I1127 23:44:37.070823   97564 command_runner.go:130] >   Corefile: |
	I1127 23:44:37.070829   97564 command_runner.go:130] >     .:53 {
	I1127 23:44:37.070835   97564 command_runner.go:130] >         errors
	I1127 23:44:37.070842   97564 command_runner.go:130] >         health {
	I1127 23:44:37.070849   97564 command_runner.go:130] >            lameduck 5s
	I1127 23:44:37.070862   97564 command_runner.go:130] >         }
	I1127 23:44:37.070865   97564 command_runner.go:130] >         ready
	I1127 23:44:37.070871   97564 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1127 23:44:37.070876   97564 command_runner.go:130] >            pods insecure
	I1127 23:44:37.070882   97564 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1127 23:44:37.070890   97564 command_runner.go:130] >            ttl 30
	I1127 23:44:37.070897   97564 command_runner.go:130] >         }
	I1127 23:44:37.070901   97564 command_runner.go:130] >         prometheus :9153
	I1127 23:44:37.070906   97564 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1127 23:44:37.070913   97564 command_runner.go:130] >            max_concurrent 1000
	I1127 23:44:37.070917   97564 command_runner.go:130] >         }
	I1127 23:44:37.070932   97564 command_runner.go:130] >         cache 30
	I1127 23:44:37.070939   97564 command_runner.go:130] >         loop
	I1127 23:44:37.070943   97564 command_runner.go:130] >         reload
	I1127 23:44:37.070951   97564 command_runner.go:130] >         loadbalance
	I1127 23:44:37.070954   97564 command_runner.go:130] >     }
	I1127 23:44:37.070958   97564 command_runner.go:130] > kind: ConfigMap
	I1127 23:44:37.070962   97564 command_runner.go:130] > metadata:
	I1127 23:44:37.070967   97564 command_runner.go:130] >   creationTimestamp: "2023-11-27T23:44:24Z"
	I1127 23:44:37.070971   97564 command_runner.go:130] >   name: coredns
	I1127 23:44:37.070977   97564 command_runner.go:130] >   namespace: kube-system
	I1127 23:44:37.070982   97564 command_runner.go:130] >   resourceVersion: "230"
	I1127 23:44:37.070989   97564 command_runner.go:130] >   uid: f0b44385-22d8-47e9-b158-58e7f50fea45
	I1127 23:44:37.073968   97564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
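The sed pipeline above edits the Corefile dumped just before it and then replaces the ConfigMap: it inserts a log directive ahead of errors and, ahead of the forward block, a hosts stanza so that host.minikube.internal resolves to the host gateway. Reconstructed from the sed expression, the injected stanza is:

        hosts {
           192.168.58.1 host.minikube.internal
           fallthrough
        }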
	I1127 23:44:37.074190   97564 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17206-4554/kubeconfig
	I1127 23:44:37.074488   97564 kapi.go:59] client config for multinode-595051: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:44:37.074736   97564 node_ready.go:35] waiting up to 6m0s for node "multinode-595051" to be "Ready" ...
	I1127 23:44:37.074811   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:37.074819   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:37.074826   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:37.074833   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:37.077571   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:37.077593   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:37.077603   97564 round_trippers.go:580]     Audit-Id: 3a263ba7-d2fd-439d-8a6f-657e22eaf7c4
	I1127 23:44:37.077610   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:37.077618   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:37.077628   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:37.077644   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:37.077656   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:37 GMT
	I1127 23:44:37.078180   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"296","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 6037 chars]
	I1127 23:44:37.079334   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:37.079356   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:37.079367   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:37.079376   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:37.081826   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:37.081854   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:37.081876   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:37.081890   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:37 GMT
	I1127 23:44:37.081899   97564 round_trippers.go:580]     Audit-Id: 8c815f4d-d384-44cf-9f00-595367008b5f
	I1127 23:44:37.081908   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:37.081916   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:37.081924   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:37.082152   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"296","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 6037 chars]
	I1127 23:44:37.160433   97564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1127 23:44:37.161363   97564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 23:44:37.582884   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:37.582913   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:37.582921   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:37.582927   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:37.652954   97564 round_trippers.go:574] Response Status: 200 OK in 69 milliseconds
	I1127 23:44:37.652985   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:37.652995   97564 round_trippers.go:580]     Audit-Id: 300aa5be-d766-4e20-b722-810b6fdaf53b
	I1127 23:44:37.653002   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:37.653033   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:37.653043   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:37.653068   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:37.653085   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:37 GMT
	I1127 23:44:37.653292   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"324","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1127 23:44:37.662494   97564 command_runner.go:130] > configmap/coredns replaced
	I1127 23:44:37.744064   97564 start.go:926] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I1127 23:44:37.758940   97564 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1127 23:44:37.763203   97564 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1127 23:44:37.763225   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:37.763237   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:37.763247   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:37.765143   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:44:37.765171   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:37.765179   97564 round_trippers.go:580]     Audit-Id: 715f6177-65a7-4abc-9252-27c24468524b
	I1127 23:44:37.765185   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:37.765190   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:37.765195   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:37.765204   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:37.765212   97564 round_trippers.go:580]     Content-Length: 1273
	I1127 23:44:37.765219   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:37 GMT
	I1127 23:44:37.765307   97564 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"359"},"items":[{"metadata":{"name":"standard","uid":"1becf330-989f-48f4-8cd6-32da1e6367be","resourceVersion":"358","creationTimestamp":"2023-11-27T23:44:37Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-27T23:44:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1127 23:44:37.765948   97564 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"1becf330-989f-48f4-8cd6-32da1e6367be","resourceVersion":"358","creationTimestamp":"2023-11-27T23:44:37Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-27T23:44:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1127 23:44:37.766076   97564 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1127 23:44:37.766107   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:37.766129   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:37.766148   97564 round_trippers.go:473]     Content-Type: application/json
	I1127 23:44:37.766166   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:37.769038   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:37.769062   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:37.769072   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:37 GMT
	I1127 23:44:37.769081   97564 round_trippers.go:580]     Audit-Id: a6e30923-4d15-467e-9447-9d78c8520d97
	I1127 23:44:37.769090   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:37.769100   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:37.769110   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:37.769116   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:37.769127   97564 round_trippers.go:580]     Content-Length: 1220
	I1127 23:44:37.769157   97564 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"1becf330-989f-48f4-8cd6-32da1e6367be","resourceVersion":"358","creationTimestamp":"2023-11-27T23:44:37Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-27T23:44:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
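The apply-then-PUT sequence above ensures the standard StorageClass carries the storageclass.kubernetes.io/is-default-class: "true" annotation, which is what makes it the cluster default. Setting (or clearing) that flag by hand is normally done with a patch; a sketch:

  kubectl patch storageclass standard -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'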
	I1127 23:44:37.989827   97564 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1127 23:44:37.997222   97564 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1127 23:44:38.005995   97564 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1127 23:44:38.051290   97564 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1127 23:44:38.063133   97564 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1127 23:44:38.075221   97564 command_runner.go:130] > pod/storage-provisioner created
	I1127 23:44:38.081740   97564 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1127 23:44:38.083483   97564 addons.go:502] enable addons completed in 1.122421909s: enabled=[default-storageclass storage-provisioner]
	I1127 23:44:38.083502   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:38.083535   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:38.083548   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:38.083556   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:38.089097   97564 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1127 23:44:38.089131   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:38.089141   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:38.089164   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:38.089172   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:38 GMT
	I1127 23:44:38.089180   97564 round_trippers.go:580]     Audit-Id: a03ab795-b079-4e4c-99e0-91bdba408e68
	I1127 23:44:38.089188   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:38.089197   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:38.089344   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"324","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1127 23:44:38.582866   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:38.582890   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:38.582898   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:38.582906   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:38.585338   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:38.585360   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:38.585370   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:38.585376   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:38.585385   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:38.585392   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:38.585400   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:38 GMT
	I1127 23:44:38.585407   97564 round_trippers.go:580]     Audit-Id: 71e2f558-309c-4e7d-a7e6-86ceb878adf7
	I1127 23:44:38.585593   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"324","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1127 23:44:39.083033   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:39.083057   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:39.083065   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:39.083071   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:39.085339   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:39.085361   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:39.085368   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:39.085374   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:39.085381   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:39 GMT
	I1127 23:44:39.085417   97564 round_trippers.go:580]     Audit-Id: 734ebdec-8795-486d-bf06-398c3e0a0510
	I1127 23:44:39.085425   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:39.085434   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:39.085578   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:44:39.085972   97564 node_ready.go:49] node "multinode-595051" has status "Ready":"True"
	I1127 23:44:39.085992   97564 node_ready.go:38] duration metric: took 2.01123166s waiting for node "multinode-595051" to be "Ready" ...
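Readiness here is detected by polling GET /api/v1/nodes/multinode-595051 until the status reports Ready=True, which took about 2s. A one-shot equivalent using kubectl's built-in wait, with the timeout matching the 6m budget logged above:

  kubectl wait --for=condition=Ready node/multinode-595051 --timeout=6m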
	I1127 23:44:39.086002   97564 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 23:44:39.086096   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1127 23:44:39.086112   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:39.086123   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:39.086134   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:39.089136   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:39.089154   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:39.089161   97564 round_trippers.go:580]     Audit-Id: aa55db10-c587-408f-a843-2cc71479ddc1
	I1127 23:44:39.089167   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:39.089172   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:39.089178   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:39.089184   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:39.089189   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:39 GMT
	I1127 23:44:39.089682   97564 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"390"},"items":[{"metadata":{"name":"coredns-5dd5756b68-px5k6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"31070a53-8a76-42ef-ba74-254dc4e13178","resourceVersion":"390","creationTimestamp":"2023-11-27T23:44:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f162c176-389a-4758-b0d8-e22eca3ff811","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f162c176-389a-4758-b0d8-e22eca3ff811\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55050 chars]
	I1127 23:44:39.092642   97564 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-px5k6" in "kube-system" namespace to be "Ready" ...
	I1127 23:44:39.092712   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-px5k6
	I1127 23:44:39.092721   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:39.092728   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:39.092734   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:39.094756   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:39.094776   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:39.094783   97564 round_trippers.go:580]     Audit-Id: 7121db39-4923-4e49-960e-f68e72efbfa4
	I1127 23:44:39.094789   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:39.094797   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:39.094806   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:39.094813   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:39.094820   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:39 GMT
	I1127 23:44:39.094919   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-px5k6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"31070a53-8a76-42ef-ba74-254dc4e13178","resourceVersion":"390","creationTimestamp":"2023-11-27T23:44:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f162c176-389a-4758-b0d8-e22eca3ff811","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f162c176-389a-4758-b0d8-e22eca3ff811\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1127 23:44:39.095311   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:39.095327   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:39.095334   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:39.095340   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:39.097176   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:44:39.097195   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:39.097201   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:39.097207   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:39.097213   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:39.097221   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:39 GMT
	I1127 23:44:39.097229   97564 round_trippers.go:580]     Audit-Id: 4fdd8c62-735e-4f39-82a7-1659396c7608
	I1127 23:44:39.097237   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:39.097350   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:44:39.097670   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-px5k6
	I1127 23:44:39.097681   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:39.097689   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:39.097694   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:39.099446   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:44:39.099468   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:39.099478   97564 round_trippers.go:580]     Audit-Id: fde752ed-a06b-4553-bc35-d2743629caf8
	I1127 23:44:39.099486   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:39.099495   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:39.099504   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:39.099509   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:39.099516   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:39 GMT
	I1127 23:44:39.099637   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-px5k6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"31070a53-8a76-42ef-ba74-254dc4e13178","resourceVersion":"390","creationTimestamp":"2023-11-27T23:44:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f162c176-389a-4758-b0d8-e22eca3ff811","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f162c176-389a-4758-b0d8-e22eca3ff811\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1127 23:44:39.100036   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:39.100049   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:39.100056   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:39.100061   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:39.101748   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:44:39.101767   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:39.101774   97564 round_trippers.go:580]     Audit-Id: 5c01016a-5be8-4cfa-ab5c-e27e9427e50c
	I1127 23:44:39.101780   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:39.101785   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:39.101792   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:39.101801   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:39.101810   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:39 GMT
	I1127 23:44:39.101968   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:44:39.603143   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-px5k6
	I1127 23:44:39.603174   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:39.603182   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:39.603188   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:39.605752   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:39.605777   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:39.605784   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:39.605790   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:39.605795   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:39.605800   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:39.605805   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:39 GMT
	I1127 23:44:39.605814   97564 round_trippers.go:580]     Audit-Id: 50671c58-a71e-4c7c-8c3f-7575490dde9d
	I1127 23:44:39.605987   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-px5k6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"31070a53-8a76-42ef-ba74-254dc4e13178","resourceVersion":"390","creationTimestamp":"2023-11-27T23:44:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f162c176-389a-4758-b0d8-e22eca3ff811","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f162c176-389a-4758-b0d8-e22eca3ff811\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1127 23:44:39.606442   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:39.606459   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:39.606469   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:39.606477   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:39.608404   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:44:39.608427   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:39.608435   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:39 GMT
	I1127 23:44:39.608441   97564 round_trippers.go:580]     Audit-Id: 004b1b38-8ae7-4ab2-8386-bb291ed08f56
	I1127 23:44:39.608450   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:39.608461   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:39.608473   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:39.608484   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:39.608630   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:44:40.103411   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-px5k6
	I1127 23:44:40.103443   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:40.103453   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:40.103462   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:40.105661   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:40.105682   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:40.105701   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:40.105707   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:40.105713   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:40 GMT
	I1127 23:44:40.105722   97564 round_trippers.go:580]     Audit-Id: a901df87-5841-44a2-acb1-e701629b32e1
	I1127 23:44:40.105730   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:40.105742   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:40.105879   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-px5k6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"31070a53-8a76-42ef-ba74-254dc4e13178","resourceVersion":"397","creationTimestamp":"2023-11-27T23:44:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f162c176-389a-4758-b0d8-e22eca3ff811","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f162c176-389a-4758-b0d8-e22eca3ff811\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1127 23:44:40.106353   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:40.106372   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:40.106379   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:40.106385   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:40.108286   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:44:40.108305   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:40.108315   97564 round_trippers.go:580]     Audit-Id: cef6ec8b-e411-4bc0-900e-0534c1a57223
	I1127 23:44:40.108320   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:40.108325   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:40.108330   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:40.108338   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:40.108346   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:40 GMT
	I1127 23:44:40.108500   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:44:40.108786   97564 pod_ready.go:92] pod "coredns-5dd5756b68-px5k6" in "kube-system" namespace has status "Ready":"True"
	I1127 23:44:40.108801   97564 pod_ready.go:81] duration metric: took 1.016133601s waiting for pod "coredns-5dd5756b68-px5k6" in "kube-system" namespace to be "Ready" ...
	I1127 23:44:40.108809   97564 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-595051" in "kube-system" namespace to be "Ready" ...
	I1127 23:44:40.108855   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-595051
	I1127 23:44:40.108862   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:40.108869   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:40.108874   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:40.110473   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:44:40.110489   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:40.110495   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:40.110501   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:40.110506   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:40.110513   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:40.110521   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:40 GMT
	I1127 23:44:40.110533   97564 round_trippers.go:580]     Audit-Id: 77903de9-39f8-4c40-aae7-168231a7c773
	I1127 23:44:40.110656   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-595051","namespace":"kube-system","uid":"c9ffa2b1-6f5a-4bda-9e11-9f3b362ebae7","resourceVersion":"315","creationTimestamp":"2023-11-27T23:44:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"4644bba5e6a2283bfa7cf03a530d41d3","kubernetes.io/config.mirror":"4644bba5e6a2283bfa7cf03a530d41d3","kubernetes.io/config.seen":"2023-11-27T23:44:18.731160273Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I1127 23:44:40.111012   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:40.111024   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:40.111031   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:40.111037   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:40.112591   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:44:40.112607   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:40.112615   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:40.112623   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:40.112631   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:40.112639   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:40.112646   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:40 GMT
	I1127 23:44:40.112654   97564 round_trippers.go:580]     Audit-Id: 30f2b9be-21dc-4b25-9136-75b42fbe97f8
	I1127 23:44:40.112750   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:44:40.113081   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-595051
	I1127 23:44:40.113091   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:40.113097   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:40.113103   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:40.114721   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:44:40.114740   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:40.114748   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:40 GMT
	I1127 23:44:40.114759   97564 round_trippers.go:580]     Audit-Id: 2cbaf774-d77a-41e6-845f-514d373e9f5e
	I1127 23:44:40.114767   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:40.114776   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:40.114788   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:40.114798   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:40.114914   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-595051","namespace":"kube-system","uid":"c9ffa2b1-6f5a-4bda-9e11-9f3b362ebae7","resourceVersion":"315","creationTimestamp":"2023-11-27T23:44:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"4644bba5e6a2283bfa7cf03a530d41d3","kubernetes.io/config.mirror":"4644bba5e6a2283bfa7cf03a530d41d3","kubernetes.io/config.seen":"2023-11-27T23:44:18.731160273Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I1127 23:44:40.115236   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:40.115247   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:40.115253   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:40.115259   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:40.116913   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:44:40.116931   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:40.116940   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:40.116959   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:40.116972   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:40.116988   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:40.116997   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:40 GMT
	I1127 23:44:40.117008   97564 round_trippers.go:580]     Audit-Id: 0d591849-1714-4732-867c-91f627deea07
	I1127 23:44:40.117109   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:44:40.617899   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-595051
	I1127 23:44:40.617923   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:40.617931   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:40.617937   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:40.620121   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:40.620147   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:40.620156   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:40.620162   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:40.620167   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:40.620172   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:40.620177   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:40 GMT
	I1127 23:44:40.620183   97564 round_trippers.go:580]     Audit-Id: 1fd84c15-7a43-4b96-96f2-b862a3decdcb
	I1127 23:44:40.620294   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-595051","namespace":"kube-system","uid":"c9ffa2b1-6f5a-4bda-9e11-9f3b362ebae7","resourceVersion":"315","creationTimestamp":"2023-11-27T23:44:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"4644bba5e6a2283bfa7cf03a530d41d3","kubernetes.io/config.mirror":"4644bba5e6a2283bfa7cf03a530d41d3","kubernetes.io/config.seen":"2023-11-27T23:44:18.731160273Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I1127 23:44:40.620670   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:40.620682   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:40.620689   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:40.620695   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:40.622790   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:40.622810   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:40.622819   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:40.622826   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:40.622834   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:40 GMT
	I1127 23:44:40.622843   97564 round_trippers.go:580]     Audit-Id: fd74df6e-984e-4066-808d-e51efa171267
	I1127 23:44:40.622853   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:40.622863   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:40.622976   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:44:41.117527   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-595051
	I1127 23:44:41.117551   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:41.117560   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:41.117566   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:41.120008   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:41.120036   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:41.120045   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:41.120053   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:41 GMT
	I1127 23:44:41.120062   97564 round_trippers.go:580]     Audit-Id: bf1f27ea-a847-45db-a787-0ac8ef9806bb
	I1127 23:44:41.120070   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:41.120079   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:41.120090   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:41.120196   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-595051","namespace":"kube-system","uid":"c9ffa2b1-6f5a-4bda-9e11-9f3b362ebae7","resourceVersion":"315","creationTimestamp":"2023-11-27T23:44:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"4644bba5e6a2283bfa7cf03a530d41d3","kubernetes.io/config.mirror":"4644bba5e6a2283bfa7cf03a530d41d3","kubernetes.io/config.seen":"2023-11-27T23:44:18.731160273Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I1127 23:44:41.120581   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:41.120594   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:41.120602   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:41.120607   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:41.122448   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:44:41.122469   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:41.122478   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:41.122486   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:41 GMT
	I1127 23:44:41.122494   97564 round_trippers.go:580]     Audit-Id: 30be7d3d-7c1c-4092-8861-b2958ec08291
	I1127 23:44:41.122503   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:41.122515   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:41.122528   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:41.122648   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:44:41.618254   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-595051
	I1127 23:44:41.618277   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:41.618285   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:41.618290   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:41.620455   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:41.620482   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:41.620491   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:41.620500   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:41.620508   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:41.620516   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:41 GMT
	I1127 23:44:41.620524   97564 round_trippers.go:580]     Audit-Id: f8920708-099c-4ee4-92f6-7b19e935a738
	I1127 23:44:41.620533   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:41.620671   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-595051","namespace":"kube-system","uid":"c9ffa2b1-6f5a-4bda-9e11-9f3b362ebae7","resourceVersion":"315","creationTimestamp":"2023-11-27T23:44:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"4644bba5e6a2283bfa7cf03a530d41d3","kubernetes.io/config.mirror":"4644bba5e6a2283bfa7cf03a530d41d3","kubernetes.io/config.seen":"2023-11-27T23:44:18.731160273Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I1127 23:44:41.621147   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:41.621163   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:41.621173   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:41.621181   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:41.622947   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:44:41.622967   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:41.622977   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:41 GMT
	I1127 23:44:41.622986   97564 round_trippers.go:580]     Audit-Id: 5e7dd203-cc81-42ab-bbd9-dff94224c1e8
	I1127 23:44:41.622995   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:41.623004   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:41.623016   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:41.623022   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:41.623195   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:44:42.117685   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-595051
	I1127 23:44:42.117717   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:42.117728   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:42.117737   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:42.120156   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:42.120181   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:42.120190   97564 round_trippers.go:580]     Audit-Id: 55da8568-88d3-4ae9-add9-bca619b5491a
	I1127 23:44:42.120197   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:42.120204   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:42.120213   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:42.120222   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:42.120232   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:42 GMT
	I1127 23:44:42.120375   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-595051","namespace":"kube-system","uid":"c9ffa2b1-6f5a-4bda-9e11-9f3b362ebae7","resourceVersion":"315","creationTimestamp":"2023-11-27T23:44:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"4644bba5e6a2283bfa7cf03a530d41d3","kubernetes.io/config.mirror":"4644bba5e6a2283bfa7cf03a530d41d3","kubernetes.io/config.seen":"2023-11-27T23:44:18.731160273Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I1127 23:44:42.120749   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:42.120760   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:42.120767   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:42.120776   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:42.123224   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:42.123246   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:42.123256   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:42.123263   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:42.123271   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:42.123279   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:42.123289   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:42 GMT
	I1127 23:44:42.123302   97564 round_trippers.go:580]     Audit-Id: e81d1a25-213b-4af7-8f19-f789e3ebdada
	I1127 23:44:42.123421   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:44:42.123739   97564 pod_ready.go:102] pod "etcd-multinode-595051" in "kube-system" namespace has status "Ready":"False"
	I1127 23:44:42.617965   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-595051
	I1127 23:44:42.617986   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:42.617994   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:42.618000   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:42.620091   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:42.620118   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:42.620130   97564 round_trippers.go:580]     Audit-Id: bd96f96a-fc42-4f39-bfef-4140695c31a1
	I1127 23:44:42.620139   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:42.620147   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:42.620156   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:42.620165   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:42.620258   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:42 GMT
	I1127 23:44:42.620389   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-595051","namespace":"kube-system","uid":"c9ffa2b1-6f5a-4bda-9e11-9f3b362ebae7","resourceVersion":"315","creationTimestamp":"2023-11-27T23:44:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"4644bba5e6a2283bfa7cf03a530d41d3","kubernetes.io/config.mirror":"4644bba5e6a2283bfa7cf03a530d41d3","kubernetes.io/config.seen":"2023-11-27T23:44:18.731160273Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I1127 23:44:42.620793   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:42.620805   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:42.620813   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:42.620821   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:42.622668   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:44:42.622685   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:42.622692   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:42 GMT
	I1127 23:44:42.622698   97564 round_trippers.go:580]     Audit-Id: bf2246b7-0d45-46e8-b1f0-6c3f2722ba58
	I1127 23:44:42.622704   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:42.622709   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:42.622714   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:42.622719   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:42.622860   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:44:43.117962   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-595051
	I1127 23:44:43.117983   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:43.117991   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:43.118001   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:43.120456   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:43.120482   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:43.120492   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:43.120501   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:43.120510   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:43 GMT
	I1127 23:44:43.120519   97564 round_trippers.go:580]     Audit-Id: 7e4844fc-35c0-4613-a4b1-95f867aa1bea
	I1127 23:44:43.120525   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:43.120536   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:43.120658   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-595051","namespace":"kube-system","uid":"c9ffa2b1-6f5a-4bda-9e11-9f3b362ebae7","resourceVersion":"315","creationTimestamp":"2023-11-27T23:44:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"4644bba5e6a2283bfa7cf03a530d41d3","kubernetes.io/config.mirror":"4644bba5e6a2283bfa7cf03a530d41d3","kubernetes.io/config.seen":"2023-11-27T23:44:18.731160273Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I1127 23:44:43.121211   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:43.121232   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:43.121243   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:43.121254   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:43.123249   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:44:43.123271   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:43.123278   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:43 GMT
	I1127 23:44:43.123283   97564 round_trippers.go:580]     Audit-Id: 28ad819f-845c-4f30-a01d-4e2bfb6f6dcd
	I1127 23:44:43.123289   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:43.123294   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:43.123300   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:43.123305   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:43.123513   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:44:43.618200   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-595051
	I1127 23:44:43.618225   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:43.618233   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:43.618239   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:43.620437   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:43.620460   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:43.620469   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:43.620476   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:43.620483   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:43.620490   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:43.620498   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:43 GMT
	I1127 23:44:43.620507   97564 round_trippers.go:580]     Audit-Id: c8c0fc69-3c8a-483c-b386-9430d3395b43
	I1127 23:44:43.620651   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-595051","namespace":"kube-system","uid":"c9ffa2b1-6f5a-4bda-9e11-9f3b362ebae7","resourceVersion":"315","creationTimestamp":"2023-11-27T23:44:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"4644bba5e6a2283bfa7cf03a530d41d3","kubernetes.io/config.mirror":"4644bba5e6a2283bfa7cf03a530d41d3","kubernetes.io/config.seen":"2023-11-27T23:44:18.731160273Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I1127 23:44:43.621033   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:43.621046   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:43.621053   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:43.621059   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:43.623143   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:43.623160   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:43.623167   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:43.623173   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:43.623179   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:43.623184   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:43.623189   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:43 GMT
	I1127 23:44:43.623194   97564 round_trippers.go:580]     Audit-Id: 6fc92fae-6fc7-43b7-badc-72f76b6ddfdb
	I1127 23:44:43.623326   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:44:44.117973   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-595051
	I1127 23:44:44.117997   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:44.118005   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:44.118011   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:44.120523   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:44.120550   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:44.120557   97564 round_trippers.go:580]     Audit-Id: 2e4e01f6-249a-4409-baac-819d5cdfaa0a
	I1127 23:44:44.120563   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:44.120568   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:44.120573   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:44.120578   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:44.120583   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:44 GMT
	I1127 23:44:44.120732   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-595051","namespace":"kube-system","uid":"c9ffa2b1-6f5a-4bda-9e11-9f3b362ebae7","resourceVersion":"315","creationTimestamp":"2023-11-27T23:44:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"4644bba5e6a2283bfa7cf03a530d41d3","kubernetes.io/config.mirror":"4644bba5e6a2283bfa7cf03a530d41d3","kubernetes.io/config.seen":"2023-11-27T23:44:18.731160273Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I1127 23:44:44.121134   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:44.121145   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:44.121153   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:44.121159   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:44.123141   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:44:44.123159   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:44.123166   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:44 GMT
	I1127 23:44:44.123171   97564 round_trippers.go:580]     Audit-Id: 81e1e40c-de28-43eb-ba2e-61b6a4fd84aa
	I1127 23:44:44.123176   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:44.123181   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:44.123186   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:44.123191   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:44.123349   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:44:44.618004   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-595051
	I1127 23:44:44.618029   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:44.618041   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:44.618049   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:44.620256   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:44.620281   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:44.620291   97564 round_trippers.go:580]     Audit-Id: ee664688-c404-4264-92e2-f11fa8d6904a
	I1127 23:44:44.620298   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:44.620305   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:44.620313   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:44.620322   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:44.620331   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:44 GMT
	I1127 23:44:44.620451   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-595051","namespace":"kube-system","uid":"c9ffa2b1-6f5a-4bda-9e11-9f3b362ebae7","resourceVersion":"315","creationTimestamp":"2023-11-27T23:44:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"4644bba5e6a2283bfa7cf03a530d41d3","kubernetes.io/config.mirror":"4644bba5e6a2283bfa7cf03a530d41d3","kubernetes.io/config.seen":"2023-11-27T23:44:18.731160273Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I1127 23:44:44.620839   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:44.620854   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:44.620864   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:44.620876   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:44.622779   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:44:44.622798   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:44.622805   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:44.622811   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:44.622816   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:44.622821   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:44 GMT
	I1127 23:44:44.622826   97564 round_trippers.go:580]     Audit-Id: ae32662c-1720-4c17-b199-7026d963c9a9
	I1127 23:44:44.622832   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:44.622962   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:44:44.623242   97564 pod_ready.go:102] pod "etcd-multinode-595051" in "kube-system" namespace has status "Ready":"False"
	I1127 23:44:45.117513   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-595051
	I1127 23:44:45.117533   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:45.117541   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:45.117547   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:45.119764   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:45.119786   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:45.119793   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:45.119800   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:45.119808   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:45.119817   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:45.119825   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:45 GMT
	I1127 23:44:45.119837   97564 round_trippers.go:580]     Audit-Id: f77d9b4e-1d3c-426f-a896-4ecc9ab5ba14
	I1127 23:44:45.119955   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-595051","namespace":"kube-system","uid":"c9ffa2b1-6f5a-4bda-9e11-9f3b362ebae7","resourceVersion":"416","creationTimestamp":"2023-11-27T23:44:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"4644bba5e6a2283bfa7cf03a530d41d3","kubernetes.io/config.mirror":"4644bba5e6a2283bfa7cf03a530d41d3","kubernetes.io/config.seen":"2023-11-27T23:44:18.731160273Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1127 23:44:45.120462   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:45.120479   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:45.120489   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:45.120499   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:45.122765   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:45.122795   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:45.122806   97564 round_trippers.go:580]     Audit-Id: ab529d00-8c48-42d7-a958-de7baa0f64f8
	I1127 23:44:45.122815   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:45.122825   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:45.122838   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:45.122849   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:45.122860   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:45 GMT
	I1127 23:44:45.123042   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:44:45.123381   97564 pod_ready.go:92] pod "etcd-multinode-595051" in "kube-system" namespace has status "Ready":"True"
	I1127 23:44:45.123399   97564 pod_ready.go:81] duration metric: took 5.014584305s waiting for pod "etcd-multinode-595051" in "kube-system" namespace to be "Ready" ...
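The block above is one iteration of minikube's pod_ready poll: GET the pod, inspect its Ready condition, sleep roughly half a second, and repeat until the condition flips or the 6m0s budget runs out. A minimal client-go sketch of that loop follows; this is a hypothetical standalone program, not minikube's actual code, and the kubeconfig path, pod name, and 500ms cadence are assumptions read off the log above.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	deadline := time.Now().Add(6 * time.Minute) // same budget as the pod_ready.go:78 waits
    	for time.Now().Before(deadline) {
    		pod, err := client.CoreV1().Pods("kube-system").Get(
    			context.TODO(), "etcd-multinode-595051", metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // ~500ms cadence visible in the timestamps above
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }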
	I1127 23:44:45.123412   97564 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-595051" in "kube-system" namespace to be "Ready" ...
	I1127 23:44:45.123468   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-595051
	I1127 23:44:45.123476   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:45.123483   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:45.123489   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:45.125334   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:44:45.125354   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:45.125364   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:45 GMT
	I1127 23:44:45.125374   97564 round_trippers.go:580]     Audit-Id: df26a141-63c9-49e0-a082-38e620db2f20
	I1127 23:44:45.125383   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:45.125390   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:45.125395   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:45.125401   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:45.125547   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-595051","namespace":"kube-system","uid":"111b6195-41a6-4248-9c66-4d3d88d8628d","resourceVersion":"418","creationTimestamp":"2023-11-27T23:44:23Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"6fe9890b037e16c7bf188f651d40131d","kubernetes.io/config.mirror":"6fe9890b037e16c7bf188f651d40131d","kubernetes.io/config.seen":"2023-11-27T23:44:18.731154530Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1127 23:44:45.126001   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:45.126015   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:45.126023   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:45.126029   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:45.127711   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:44:45.127727   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:45.127734   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:45.127739   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:45 GMT
	I1127 23:44:45.127744   97564 round_trippers.go:580]     Audit-Id: 10270dbd-359f-4142-8cbf-ccdb513605e5
	I1127 23:44:45.127749   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:45.127754   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:45.127759   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:45.127928   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:44:45.128199   97564 pod_ready.go:92] pod "kube-apiserver-multinode-595051" in "kube-system" namespace has status "Ready":"True"
	I1127 23:44:45.128213   97564 pod_ready.go:81] duration metric: took 4.794133ms waiting for pod "kube-apiserver-multinode-595051" in "kube-system" namespace to be "Ready" ...
	I1127 23:44:45.128221   97564 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-595051" in "kube-system" namespace to be "Ready" ...
	I1127 23:44:45.128261   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-595051
	I1127 23:44:45.128269   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:45.128275   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:45.128282   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:45.129917   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:44:45.129932   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:45.129938   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:45.129944   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:45 GMT
	I1127 23:44:45.129951   97564 round_trippers.go:580]     Audit-Id: 0e612771-da2f-497e-9474-605599098649
	I1127 23:44:45.129959   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:45.129966   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:45.129974   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:45.130171   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-595051","namespace":"kube-system","uid":"fe43d1dc-0983-4cc9-b07a-9a17a606bc82","resourceVersion":"415","creationTimestamp":"2023-11-27T23:44:25Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"502f83a658bafeb48b025970fae2234e","kubernetes.io/config.mirror":"502f83a658bafeb48b025970fae2234e","kubernetes.io/config.seen":"2023-11-27T23:44:24.605671216Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1127 23:44:45.130535   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:45.130548   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:45.130558   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:45.130570   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:45.132173   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:44:45.132188   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:45.132194   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:45.132201   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:45 GMT
	I1127 23:44:45.132209   97564 round_trippers.go:580]     Audit-Id: 34f9c1ed-577f-4258-a4ee-9712dc78f761
	I1127 23:44:45.132218   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:45.132227   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:45.132237   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:45.132326   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:44:45.132716   97564 pod_ready.go:92] pod "kube-controller-manager-multinode-595051" in "kube-system" namespace has status "Ready":"True"
	I1127 23:44:45.132734   97564 pod_ready.go:81] duration metric: took 4.50517ms waiting for pod "kube-controller-manager-multinode-595051" in "kube-system" namespace to be "Ready" ...
	I1127 23:44:45.132748   97564 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gjwvt" in "kube-system" namespace to be "Ready" ...
	I1127 23:44:45.132809   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gjwvt
	I1127 23:44:45.132819   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:45.132830   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:45.132843   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:45.134794   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:44:45.134830   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:45.134840   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:45.134848   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:45.134856   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:45.134866   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:45.134879   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:45 GMT
	I1127 23:44:45.134888   97564 round_trippers.go:580]     Audit-Id: 41312391-67e8-4a71-935a-ae919eb5e4fa
	I1127 23:44:45.135022   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gjwvt","generateName":"kube-proxy-","namespace":"kube-system","uid":"9b33c9c4-54cf-49e4-a97a-d782fa80c2d8","resourceVersion":"383","creationTimestamp":"2023-11-27T23:44:37Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1f3c9e31-edf0-467a-8ea0-336e61619a0e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1f3c9e31-edf0-467a-8ea0-336e61619a0e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1127 23:44:45.135383   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:45.135401   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:45.135411   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:45.135420   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:45.137050   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:44:45.137070   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:45.137080   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:45.137089   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:45.137097   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:45.137109   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:45.137121   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:45 GMT
	I1127 23:44:45.137132   97564 round_trippers.go:580]     Audit-Id: 13026ab1-fc03-4577-9ccb-56b499a1d1fd
	I1127 23:44:45.137364   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:44:45.137635   97564 pod_ready.go:92] pod "kube-proxy-gjwvt" in "kube-system" namespace has status "Ready":"True"
	I1127 23:44:45.137649   97564 pod_ready.go:81] duration metric: took 4.887534ms waiting for pod "kube-proxy-gjwvt" in "kube-system" namespace to be "Ready" ...
	I1127 23:44:45.137657   97564 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-595051" in "kube-system" namespace to be "Ready" ...
	I1127 23:44:45.284064   97564 request.go:629] Waited for 146.34284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-595051
	I1127 23:44:45.284148   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-595051
	I1127 23:44:45.284155   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:45.284167   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:45.284176   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:45.286733   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:45.286753   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:45.286760   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:45.286766   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:45.286771   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:45.286776   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:45 GMT
	I1127 23:44:45.286781   97564 round_trippers.go:580]     Audit-Id: 04d6e079-799a-4136-91d8-70d32f27ebf1
	I1127 23:44:45.286786   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:45.286951   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-595051","namespace":"kube-system","uid":"661b7e59-8eb3-4e67-b3d6-7f2cd255b11d","resourceVersion":"417","creationTimestamp":"2023-11-27T23:44:25Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c87ccbe06dcbf99adfb998536f155f5a","kubernetes.io/config.mirror":"c87ccbe06dcbf99adfb998536f155f5a","kubernetes.io/config.seen":"2023-11-27T23:44:24.605675386Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1127 23:44:45.483704   97564 request.go:629] Waited for 196.344355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:45.483779   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:44:45.483790   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:45.483797   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:45.483811   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:45.486210   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:45.486232   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:45.486239   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:45.486244   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:45.486249   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:45.486262   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:45.486272   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:45 GMT
	I1127 23:44:45.486283   97564 round_trippers.go:580]     Audit-Id: 3b2cd518-7eda-49d6-aa6b-0c7d7847a612
	I1127 23:44:45.486415   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:44:45.486834   97564 pod_ready.go:92] pod "kube-scheduler-multinode-595051" in "kube-system" namespace has status "Ready":"True"
	I1127 23:44:45.486854   97564 pod_ready.go:81] duration metric: took 349.189984ms waiting for pod "kube-scheduler-multinode-595051" in "kube-system" namespace to be "Ready" ...
	I1127 23:44:45.486888   97564 pod_ready.go:38] duration metric: took 6.400873033s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
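The "Waited ... due to client-side throttling, not priority and fairness" lines sprinkled through this phase come from client-go's own token-bucket rate limiter, not from the apiserver. A hedged sketch of how a caller can widen that bucket on rest.Config before building the clientset; the QPS/Burst values here are illustrative, not what minikube configures.

    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	// client-go defaults to a small token bucket (historically QPS=5, Burst=10);
    	// when a request has to wait on it, client-go logs the "client-side
    	// throttling" message seen above. Raising these values widens the bucket.
    	cfg.QPS = 50
    	cfg.Burst = 100
    	_ = kubernetes.NewForConfigOrDie(cfg)
    }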
	I1127 23:44:45.486907   97564 api_server.go:52] waiting for apiserver process to appear ...
	I1127 23:44:45.486967   97564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 23:44:45.496836   97564 command_runner.go:130] > 1440
	I1127 23:44:45.497538   97564 api_server.go:72] duration metric: took 8.514017358s to wait for apiserver process to appear ...
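The apiserver-process check above boils down to a single pgrep run over ssh: the command prints the newest matching PID (1440 here) and exits non-zero when nothing matches. A rough local equivalent as an os/exec sketch, assuming it runs on the node itself rather than through minikube's ssh_runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// -f matches against the full command line, -x requires the pattern to
    	// match that whole line, -n picks the newest matching process.
    	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		fmt.Println("kube-apiserver process not found:", err)
    		return
    	}
    	fmt.Println("kube-apiserver pid:", strings.TrimSpace(string(out)))
    }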
	I1127 23:44:45.497556   97564 api_server.go:88] waiting for apiserver healthz status ...
	I1127 23:44:45.497577   97564 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1127 23:44:45.501852   97564 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1127 23:44:45.501919   97564 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1127 23:44:45.501930   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:45.501942   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:45.501955   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:45.502981   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:44:45.503001   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:45.503011   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:45.503019   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:45.503028   97564 round_trippers.go:580]     Content-Length: 264
	I1127 23:44:45.503037   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:45 GMT
	I1127 23:44:45.503053   97564 round_trippers.go:580]     Audit-Id: bc7bb80e-e6ac-450d-aa4a-2c31933a23cc
	I1127 23:44:45.503062   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:45.503075   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:45.503097   97564 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1127 23:44:45.503205   97564 api_server.go:141] control plane version: v1.28.4
	I1127 23:44:45.503230   97564 api_server.go:131] duration metric: took 5.662828ms to wait for apiserver health ...
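After the process check, the health probe is two authenticated GETs: /healthz must answer 200 "ok", then /version is decoded to recover the control-plane version (v1.28.4 above). A compact client-go sketch of both calls, again as a hypothetical standalone program with the kubeconfig path assumed:

    package main

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// GET /healthz with the cluster credentials; a healthy apiserver answers "ok".
    	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("healthz:", string(body))

    	// GET /version, decoded into version.Info (major/minor/gitVersion, as in the JSON above).
    	v, err := client.Discovery().ServerVersion()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("control plane version:", v.GitVersion)
    }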
	I1127 23:44:45.503241   97564 system_pods.go:43] waiting for kube-system pods to appear ...
	I1127 23:44:45.683657   97564 request.go:629] Waited for 180.35084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1127 23:44:45.683728   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1127 23:44:45.683733   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:45.683741   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:45.683747   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:45.686949   97564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:44:45.686973   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:45.686981   97564 round_trippers.go:580]     Audit-Id: 8bead47d-cad2-4a9e-906e-640c7a4558b2
	I1127 23:44:45.686991   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:45.686996   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:45.687002   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:45.687008   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:45.687013   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:45 GMT
	I1127 23:44:45.687433   97564 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"coredns-5dd5756b68-px5k6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"31070a53-8a76-42ef-ba74-254dc4e13178","resourceVersion":"397","creationTimestamp":"2023-11-27T23:44:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f162c176-389a-4758-b0d8-e22eca3ff811","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f162c176-389a-4758-b0d8-e22eca3ff811\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1127 23:44:45.689136   97564 system_pods.go:59] 8 kube-system pods found
	I1127 23:44:45.689163   97564 system_pods.go:61] "coredns-5dd5756b68-px5k6" [31070a53-8a76-42ef-ba74-254dc4e13178] Running
	I1127 23:44:45.689168   97564 system_pods.go:61] "etcd-multinode-595051" [c9ffa2b1-6f5a-4bda-9e11-9f3b362ebae7] Running
	I1127 23:44:45.689172   97564 system_pods.go:61] "kindnet-2hchr" [30497a11-9440-4749-bf2b-d01df4f4b9b9] Running
	I1127 23:44:45.689176   97564 system_pods.go:61] "kube-apiserver-multinode-595051" [111b6195-41a6-4248-9c66-4d3d88d8628d] Running
	I1127 23:44:45.689180   97564 system_pods.go:61] "kube-controller-manager-multinode-595051" [fe43d1dc-0983-4cc9-b07a-9a17a606bc82] Running
	I1127 23:44:45.689184   97564 system_pods.go:61] "kube-proxy-gjwvt" [9b33c9c4-54cf-49e4-a97a-d782fa80c2d8] Running
	I1127 23:44:45.689188   97564 system_pods.go:61] "kube-scheduler-multinode-595051" [661b7e59-8eb3-4e67-b3d6-7f2cd255b11d] Running
	I1127 23:44:45.689191   97564 system_pods.go:61] "storage-provisioner" [4321beea-2377-49ee-947f-88a6473310ea] Running
	I1127 23:44:45.689197   97564 system_pods.go:74] duration metric: took 185.951137ms to wait for pod list to return data ...
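The eight-pod inventory above is a plain namespaced list call followed by a per-pod status check. A sketch under the same assumptions as the earlier snippets:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		// mirrors the system_pods.go:61 lines above: name, uid, phase
    		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
    	}
    }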
	I1127 23:44:45.689206   97564 default_sa.go:34] waiting for default service account to be created ...
	I1127 23:44:45.883622   97564 request.go:629] Waited for 194.351082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1127 23:44:45.883690   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1127 23:44:45.883697   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:45.883709   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:45.883722   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:45.886264   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:45.886288   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:45.886298   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:45 GMT
	I1127 23:44:45.886306   97564 round_trippers.go:580]     Audit-Id: 84fe9904-de6a-48d7-9103-da135053ccc3
	I1127 23:44:45.886313   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:45.886322   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:45.886333   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:45.886343   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:45.886372   97564 round_trippers.go:580]     Content-Length: 261
	I1127 23:44:45.886404   97564 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a420784b-cb1e-4fde-bf34-0055d54a8737","resourceVersion":"299","creationTimestamp":"2023-11-27T23:44:36Z"}}]}
	I1127 23:44:45.886610   97564 default_sa.go:45] found service account: "default"
	I1127 23:44:45.886629   97564 default_sa.go:55] duration metric: took 197.413389ms for default service account to be created ...
	I1127 23:44:45.886640   97564 system_pods.go:116] waiting for k8s-apps to be running ...
	I1127 23:44:46.084107   97564 request.go:629] Waited for 197.397351ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1127 23:44:46.084173   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1127 23:44:46.084179   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:46.084197   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:46.084204   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:46.087759   97564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:44:46.087785   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:46.087793   97564 round_trippers.go:580]     Audit-Id: fd608344-d670-4c9c-98f3-e3816f2c319d
	I1127 23:44:46.087799   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:46.087804   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:46.087810   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:46.087816   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:46.087825   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:46 GMT
	I1127 23:44:46.088188   97564 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"coredns-5dd5756b68-px5k6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"31070a53-8a76-42ef-ba74-254dc4e13178","resourceVersion":"397","creationTimestamp":"2023-11-27T23:44:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f162c176-389a-4758-b0d8-e22eca3ff811","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f162c176-389a-4758-b0d8-e22eca3ff811\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1127 23:44:46.089883   97564 system_pods.go:86] 8 kube-system pods found
	I1127 23:44:46.089907   97564 system_pods.go:89] "coredns-5dd5756b68-px5k6" [31070a53-8a76-42ef-ba74-254dc4e13178] Running
	I1127 23:44:46.089912   97564 system_pods.go:89] "etcd-multinode-595051" [c9ffa2b1-6f5a-4bda-9e11-9f3b362ebae7] Running
	I1127 23:44:46.089917   97564 system_pods.go:89] "kindnet-2hchr" [30497a11-9440-4749-bf2b-d01df4f4b9b9] Running
	I1127 23:44:46.089924   97564 system_pods.go:89] "kube-apiserver-multinode-595051" [111b6195-41a6-4248-9c66-4d3d88d8628d] Running
	I1127 23:44:46.089932   97564 system_pods.go:89] "kube-controller-manager-multinode-595051" [fe43d1dc-0983-4cc9-b07a-9a17a606bc82] Running
	I1127 23:44:46.089943   97564 system_pods.go:89] "kube-proxy-gjwvt" [9b33c9c4-54cf-49e4-a97a-d782fa80c2d8] Running
	I1127 23:44:46.089950   97564 system_pods.go:89] "kube-scheduler-multinode-595051" [661b7e59-8eb3-4e67-b3d6-7f2cd255b11d] Running
	I1127 23:44:46.089965   97564 system_pods.go:89] "storage-provisioner" [4321beea-2377-49ee-947f-88a6473310ea] Running
	I1127 23:44:46.089971   97564 system_pods.go:126] duration metric: took 203.325858ms to wait for k8s-apps to be running ...
	I1127 23:44:46.089977   97564 system_svc.go:44] waiting for kubelet service to be running ....
	I1127 23:44:46.090032   97564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:44:46.101309   97564 system_svc.go:56] duration metric: took 11.324763ms WaitForService to wait for kubelet.
	I1127 23:44:46.101329   97564 kubeadm.go:581] duration metric: took 9.117811607s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
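The kubelet liveness check reduces to an exit-code test: systemctl is-active --quiet prints nothing and exits 0 only when the unit is active. A sketch of the same test, run on the node directly rather than via minikube's ssh_runner, and dropping the stray "service" token from the logged command line:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// --quiet suppresses output; the exit status alone carries the answer.
    	if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
    		fmt.Println("kubelet service is not active:", err)
    		return
    	}
    	fmt.Println("kubelet service is active")
    }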
	I1127 23:44:46.101349   97564 node_conditions.go:102] verifying NodePressure condition ...
	I1127 23:44:46.283796   97564 request.go:629] Waited for 182.37201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1127 23:44:46.283866   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1127 23:44:46.283872   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:46.283882   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:46.283892   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:46.286468   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:46.286495   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:46.286505   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:46.286513   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:46.286522   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:46.286531   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:46.286544   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:46 GMT
	I1127 23:44:46.286553   97564 round_trippers.go:580]     Audit-Id: 38933286-4567-43e4-aed9-d1e624854cd7
	I1127 23:44:46.286658   97564 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I1127 23:44:46.287037   97564 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1127 23:44:46.287057   97564 node_conditions.go:123] node cpu capacity is 8
	I1127 23:44:46.287068   97564 node_conditions.go:105] duration metric: took 185.714642ms to run NodePressure ...
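The NodePressure verification reads capacity straight off the Node object; the ephemeral-storage and cpu figures above (304681132Ki, 8) are resource.Quantity values in the node status. A sketch of pulling them out, with the node name taken from the log:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-595051", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	cpu := node.Status.Capacity[corev1.ResourceCPU]
    	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
    	fmt.Println("node cpu capacity:", cpu.Value())                   // 8 in the log above
    	fmt.Println("node ephemeral storage capacity:", storage.String()) // 304681132Ki in the log above
    }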
	I1127 23:44:46.287078   97564 start.go:228] waiting for startup goroutines ...
	I1127 23:44:46.287083   97564 start.go:233] waiting for cluster config update ...
	I1127 23:44:46.287092   97564 start.go:242] writing updated cluster config ...
	I1127 23:44:46.289524   97564 out.go:177] 
	I1127 23:44:46.291146   97564 config.go:182] Loaded profile config "multinode-595051": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:44:46.291211   97564 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/config.json ...
	I1127 23:44:46.293108   97564 out.go:177] * Starting worker node multinode-595051-m02 in cluster multinode-595051
	I1127 23:44:46.294658   97564 cache.go:121] Beginning downloading kic base image for docker with crio
	I1127 23:44:46.296207   97564 out.go:177] * Pulling base image ...
	I1127 23:44:46.297836   97564 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:44:46.297861   97564 cache.go:56] Caching tarball of preloaded images
	I1127 23:44:46.297962   97564 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 23:44:46.297994   97564 preload.go:174] Found /home/jenkins/minikube-integration/17206-4554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1127 23:44:46.298007   97564 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1127 23:44:46.298125   97564 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/config.json ...
	I1127 23:44:46.314791   97564 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1127 23:44:46.314813   97564 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	I1127 23:44:46.314831   97564 cache.go:194] Successfully downloaded all kic artifacts
	I1127 23:44:46.314863   97564 start.go:365] acquiring machines lock for multinode-595051-m02: {Name:mk96feb834b56ff0ea0e80adbe67978fca7d2d9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:44:46.314970   97564 start.go:369] acquired machines lock for "multinode-595051-m02" in 81.915µs
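
[Editor's note: the "machines lock" above is a named mutex spec (note the Delay:500ms and Timeout:10m0s fields). As an illustrative stand-in only, the same acquire-with-retry-until-deadline shape using an advisory flock; this is not minikube's lock implementation:

// Illustrative stand-in for the machines lock: retry an advisory flock
// on a lock file every `delay` until `timeout` elapses.
package lock

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

func acquire(path string, delay, timeout time.Duration) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return nil, err
	}
	deadline := time.Now().Add(timeout)
	for {
		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
			return f, nil // caller releases with syscall.LOCK_UN and f.Close()
		}
		if time.Now().After(deadline) {
			f.Close()
			return nil, fmt.Errorf("timed out acquiring lock %s", path)
		}
		time.Sleep(delay)
	}
}
]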
	I1127 23:44:46.315000   97564 start.go:93] Provisioning new machine with config: &{Name:multinode-595051 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-595051 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1127 23:44:46.315079   97564 start.go:125] createHost starting for "m02" (driver="docker")
	I1127 23:44:46.317474   97564 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1127 23:44:46.317577   97564 start.go:159] libmachine.API.Create for "multinode-595051" (driver="docker")
	I1127 23:44:46.317598   97564 client.go:168] LocalClient.Create starting
	I1127 23:44:46.317679   97564 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem
	I1127 23:44:46.317723   97564 main.go:141] libmachine: Decoding PEM data...
	I1127 23:44:46.317741   97564 main.go:141] libmachine: Parsing certificate...
	I1127 23:44:46.317804   97564 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-4554/.minikube/certs/cert.pem
	I1127 23:44:46.317833   97564 main.go:141] libmachine: Decoding PEM data...
	I1127 23:44:46.317843   97564 main.go:141] libmachine: Parsing certificate...
	I1127 23:44:46.318034   97564 cli_runner.go:164] Run: docker network inspect multinode-595051 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 23:44:46.333521   97564 network_create.go:77] Found existing network {name:multinode-595051 subnet:0xc00297b6b0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1127 23:44:46.333591   97564 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-595051-m02" container
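
[Editor's note: the static IP is derived from the existing network rather than asked of Docker: gateway 192.168.58.1 plus a per-node offset yields 192.168.58.3 for m02. A minimal sketch of that arithmetic (illustrative only, not minikube's implementation):

// Illustrative only: derive a node's static IP as gateway + offset
// inside the same /24 (offset 2 for the second node, m02).
package main

import (
	"fmt"
	"net"
)

func nodeIP(gateway string, offset byte) (string, error) {
	ip := net.ParseIP(gateway).To4()
	if ip == nil {
		return "", fmt.Errorf("not IPv4: %s", gateway)
	}
	out := make(net.IP, len(ip))
	copy(out, ip)
	out[3] += offset // no overflow handling; fine for small clusters
	return out.String(), nil
}

func main() {
	ip, _ := nodeIP("192.168.58.1", 2)
	fmt.Println(ip) // 192.168.58.3
}
]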
	I1127 23:44:46.333653   97564 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1127 23:44:46.350234   97564 cli_runner.go:164] Run: docker volume create multinode-595051-m02 --label name.minikube.sigs.k8s.io=multinode-595051-m02 --label created_by.minikube.sigs.k8s.io=true
	I1127 23:44:46.366718   97564 oci.go:103] Successfully created a docker volume multinode-595051-m02
	I1127 23:44:46.366790   97564 cli_runner.go:164] Run: docker run --rm --name multinode-595051-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-595051-m02 --entrypoint /usr/bin/test -v multinode-595051-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1127 23:44:46.890130   97564 oci.go:107] Successfully prepared a docker volume multinode-595051-m02
	I1127 23:44:46.890166   97564 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:44:46.890191   97564 kic.go:194] Starting extracting preloaded images to volume ...
	I1127 23:44:46.890264   97564 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17206-4554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-595051-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1127 23:44:51.962942   97564 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17206-4554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-595051-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir: (5.072639734s)
	I1127 23:44:51.962981   97564 kic.go:203] duration metric: took 5.072786 seconds to extract preloaded images to volume
	W1127 23:44:51.963127   97564 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1127 23:44:51.963248   97564 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1127 23:44:52.014750   97564 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-595051-m02 --name multinode-595051-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-595051-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-595051-m02 --network multinode-595051 --ip 192.168.58.3 --volume multinode-595051-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1127 23:44:52.341495   97564 cli_runner.go:164] Run: docker container inspect multinode-595051-m02 --format={{.State.Running}}
	I1127 23:44:52.359970   97564 cli_runner.go:164] Run: docker container inspect multinode-595051-m02 --format={{.State.Status}}
	I1127 23:44:52.376957   97564 cli_runner.go:164] Run: docker exec multinode-595051-m02 stat /var/lib/dpkg/alternatives/iptables
	I1127 23:44:52.435631   97564 oci.go:144] the created container "multinode-595051-m02" has a running status.
	I1127 23:44:52.435664   97564 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17206-4554/.minikube/machines/multinode-595051-m02/id_rsa...
	I1127 23:44:52.630671   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/machines/multinode-595051-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1127 23:44:52.630717   97564 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17206-4554/.minikube/machines/multinode-595051-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1127 23:44:52.649991   97564 cli_runner.go:164] Run: docker container inspect multinode-595051-m02 --format={{.State.Status}}
	I1127 23:44:52.669422   97564 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1127 23:44:52.669448   97564 kic_runner.go:114] Args: [docker exec --privileged multinode-595051-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1127 23:44:52.734266   97564 cli_runner.go:164] Run: docker container inspect multinode-595051-m02 --format={{.State.Status}}
	I1127 23:44:52.755596   97564 machine.go:88] provisioning docker machine ...
	I1127 23:44:52.755640   97564 ubuntu.go:169] provisioning hostname "multinode-595051-m02"
	I1127 23:44:52.755707   97564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-595051-m02
	I1127 23:44:52.778800   97564 main.go:141] libmachine: Using SSH client type: native
	I1127 23:44:52.779277   97564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1127 23:44:52.779295   97564 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-595051-m02 && echo "multinode-595051-m02" | sudo tee /etc/hostname
	I1127 23:44:53.022460   97564 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-595051-m02
	
	I1127 23:44:53.022541   97564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-595051-m02
	I1127 23:44:53.039543   97564 main.go:141] libmachine: Using SSH client type: native
	I1127 23:44:53.039880   97564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1127 23:44:53.039900   97564 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-595051-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-595051-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-595051-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1127 23:44:53.170147   97564 main.go:141] libmachine: SSH cmd err, output: <nil>: 
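
[Editor's note: both hostname commands above run over an SSH session to the forwarded port 127.0.0.1:32852 using the generated key. A self-contained sketch of such a session with golang.org/x/crypto/ssh (key path and port copied from the log; error handling simplified; not minikube's native client):

// Sketch: run one command over SSH the way the provisioner does above.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/17206-4554/.minikube/machines/multinode-595051-m02/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, no known_hosts
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32852", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out) // expect: multinode-595051-m02
}
]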
	I1127 23:44:53.170189   97564 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4554/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4554/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4554/.minikube}
	I1127 23:44:53.170213   97564 ubuntu.go:177] setting up certificates
	I1127 23:44:53.170231   97564 provision.go:83] configureAuth start
	I1127 23:44:53.170295   97564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-595051-m02
	I1127 23:44:53.186462   97564 provision.go:138] copyHostCerts
	I1127 23:44:53.186506   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17206-4554/.minikube/cert.pem
	I1127 23:44:53.186538   97564 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4554/.minikube/cert.pem, removing ...
	I1127 23:44:53.186547   97564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4554/.minikube/cert.pem
	I1127 23:44:53.186613   97564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4554/.minikube/cert.pem (1123 bytes)
	I1127 23:44:53.186684   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17206-4554/.minikube/key.pem
	I1127 23:44:53.186701   97564 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4554/.minikube/key.pem, removing ...
	I1127 23:44:53.186707   97564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4554/.minikube/key.pem
	I1127 23:44:53.186730   97564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4554/.minikube/key.pem (1679 bytes)
	I1127 23:44:53.186772   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17206-4554/.minikube/ca.pem
	I1127 23:44:53.186787   97564 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4554/.minikube/ca.pem, removing ...
	I1127 23:44:53.186795   97564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4554/.minikube/ca.pem
	I1127 23:44:53.186817   97564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4554/.minikube/ca.pem (1078 bytes)
	I1127 23:44:53.186871   97564 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4554/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca-key.pem org=jenkins.multinode-595051-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-595051-m02]
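
[Editor's note: the server cert is issued against the CA with the SAN list shown (node IP, loopback, localhost, minikube, the node hostname). A minimal crypto/x509 sketch of issuing such a cert; signServerCert is a hypothetical helper, not minikube's API:

// Sketch: issue a server cert with the SANs from the log, signed by a CA.
package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-595051-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN list taken from the log line above.
		IPAddresses: []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "multinode-595051-m02"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}
]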
	I1127 23:44:53.347117   97564 provision.go:172] copyRemoteCerts
	I1127 23:44:53.347179   97564 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1127 23:44:53.347212   97564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-595051-m02
	I1127 23:44:53.363558   97564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/multinode-595051-m02/id_rsa Username:docker}
	I1127 23:44:53.458357   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1127 23:44:53.458411   97564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1127 23:44:53.479711   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1127 23:44:53.479777   97564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1127 23:44:53.500764   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1127 23:44:53.500827   97564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1127 23:44:53.521746   97564 provision.go:86] duration metric: configureAuth took 351.500627ms
	I1127 23:44:53.521777   97564 ubuntu.go:193] setting minikube options for container-runtime
	I1127 23:44:53.521959   97564 config.go:182] Loaded profile config "multinode-595051": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:44:53.522158   97564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-595051-m02
	I1127 23:44:53.538108   97564 main.go:141] libmachine: Using SSH client type: native
	I1127 23:44:53.538426   97564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1127 23:44:53.538443   97564 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1127 23:44:53.745686   97564 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1127 23:44:53.745712   97564 machine.go:91] provisioned docker machine in 990.090902ms
	I1127 23:44:53.745722   97564 client.go:171] LocalClient.Create took 7.428115879s
	I1127 23:44:53.745740   97564 start.go:167] duration metric: libmachine.API.Create for "multinode-595051" took 7.428162682s
	I1127 23:44:53.745747   97564 start.go:300] post-start starting for "multinode-595051-m02" (driver="docker")
	I1127 23:44:53.745756   97564 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1127 23:44:53.745820   97564 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1127 23:44:53.745869   97564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-595051-m02
	I1127 23:44:53.762481   97564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/multinode-595051-m02/id_rsa Username:docker}
	I1127 23:44:53.850945   97564 ssh_runner.go:195] Run: cat /etc/os-release
	I1127 23:44:53.853914   97564 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1127 23:44:53.853932   97564 command_runner.go:130] > NAME="Ubuntu"
	I1127 23:44:53.853941   97564 command_runner.go:130] > VERSION_ID="22.04"
	I1127 23:44:53.853947   97564 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1127 23:44:53.853952   97564 command_runner.go:130] > VERSION_CODENAME=jammy
	I1127 23:44:53.853956   97564 command_runner.go:130] > ID=ubuntu
	I1127 23:44:53.853961   97564 command_runner.go:130] > ID_LIKE=debian
	I1127 23:44:53.853966   97564 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1127 23:44:53.853971   97564 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1127 23:44:53.853978   97564 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1127 23:44:53.853987   97564 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1127 23:44:53.853991   97564 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1127 23:44:53.854036   97564 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1127 23:44:53.854085   97564 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1127 23:44:53.854098   97564 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1127 23:44:53.854109   97564 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1127 23:44:53.854119   97564 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4554/.minikube/addons for local assets ...
	I1127 23:44:53.854167   97564 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4554/.minikube/files for local assets ...
	I1127 23:44:53.854236   97564 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4554/.minikube/files/etc/ssl/certs/113062.pem -> 113062.pem in /etc/ssl/certs
	I1127 23:44:53.854245   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/files/etc/ssl/certs/113062.pem -> /etc/ssl/certs/113062.pem
	I1127 23:44:53.854318   97564 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1127 23:44:53.862303   97564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/files/etc/ssl/certs/113062.pem --> /etc/ssl/certs/113062.pem (1708 bytes)
	I1127 23:44:53.884297   97564 start.go:303] post-start completed in 138.538441ms
	I1127 23:44:53.884626   97564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-595051-m02
	I1127 23:44:53.900847   97564 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/config.json ...
	I1127 23:44:53.901106   97564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 23:44:53.901146   97564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-595051-m02
	I1127 23:44:53.918032   97564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/multinode-595051-m02/id_rsa Username:docker}
	I1127 23:44:54.002776   97564 command_runner.go:130] > 21%!
	(MISSING)I1127 23:44:54.002846   97564 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1127 23:44:54.007032   97564 command_runner.go:130] > 233G
	I1127 23:44:54.007068   97564 start.go:128] duration metric: createHost completed in 7.691976931s
	I1127 23:44:54.007082   97564 start.go:83] releasing machines lock for "multinode-595051-m02", held for 7.692097639s
	I1127 23:44:54.007150   97564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-595051-m02
	I1127 23:44:54.025301   97564 out.go:177] * Found network options:
	I1127 23:44:54.026863   97564 out.go:177]   - NO_PROXY=192.168.58.2
	W1127 23:44:54.028326   97564 proxy.go:119] fail to check proxy env: Error ip not in block
	W1127 23:44:54.028369   97564 proxy.go:119] fail to check proxy env: Error ip not in block
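
[Editor's note: the two warnings are plausibly because NO_PROXY=192.168.58.2 is a bare IP while this check expects CIDR blocks, hence "ip not in block". A sketch of that shape of check; inNoProxy is illustrative, not the actual function:

// Sketch: does ip fall inside any CIDR entry of a NO_PROXY list?
// Entries that don't parse as CIDRs (e.g. a bare IP) are skipped here,
// which would surface as the "ip not in block" error above.
package proxycheck

import (
	"net"
	"strings"
)

func inNoProxy(ip, noProxy string) bool {
	addr := net.ParseIP(ip)
	for _, block := range strings.Split(noProxy, ",") {
		if _, cidr, err := net.ParseCIDR(strings.TrimSpace(block)); err == nil && cidr.Contains(addr) {
			return true
		}
	}
	return false
}
]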
	I1127 23:44:54.028453   97564 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1127 23:44:54.028510   97564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-595051-m02
	I1127 23:44:54.028544   97564 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1127 23:44:54.028623   97564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-595051-m02
	I1127 23:44:54.045482   97564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/multinode-595051-m02/id_rsa Username:docker}
	I1127 23:44:54.045476   97564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/multinode-595051-m02/id_rsa Username:docker}
	I1127 23:44:54.263149   97564 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1127 23:44:54.263295   97564 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1127 23:44:54.267106   97564 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1127 23:44:54.267126   97564 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1127 23:44:54.267133   97564 command_runner.go:130] > Device: b0h/176d	Inode: 541438      Links: 1
	I1127 23:44:54.267139   97564 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1127 23:44:54.267145   97564 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1127 23:44:54.267150   97564 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1127 23:44:54.267154   97564 command_runner.go:130] > Change: 2023-11-27 23:25:11.088260507 +0000
	I1127 23:44:54.267159   97564 command_runner.go:130] >  Birth: 2023-11-27 23:25:11.088260507 +0000
	I1127 23:44:54.267320   97564 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 23:44:54.284120   97564 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1127 23:44:54.284189   97564 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 23:44:54.310293   97564 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1127 23:44:54.310368   97564 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1127 23:44:54.310384   97564 start.go:472] detecting cgroup driver to use...
	I1127 23:44:54.310418   97564 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1127 23:44:54.310460   97564 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1127 23:44:54.324187   97564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1127 23:44:54.334459   97564 docker.go:203] disabling cri-docker service (if available) ...
	I1127 23:44:54.334509   97564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1127 23:44:54.346187   97564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1127 23:44:54.358760   97564 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1127 23:44:54.432151   97564 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1127 23:44:54.509432   97564 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1127 23:44:54.509478   97564 docker.go:219] disabling docker service ...
	I1127 23:44:54.509540   97564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1127 23:44:54.528448   97564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1127 23:44:54.539361   97564 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1127 23:44:54.613128   97564 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1127 23:44:54.613210   97564 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1127 23:44:54.695939   97564 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1127 23:44:54.696023   97564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1127 23:44:54.706207   97564 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 23:44:54.719660   97564 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1127 23:44:54.720456   97564 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1127 23:44:54.720516   97564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:44:54.729332   97564 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1127 23:44:54.729401   97564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:44:54.737781   97564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:44:54.746577   97564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
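
[Editor's note: the sed commands above rewrite the CRI-O drop-in in place: pin the pause image, switch the cgroup manager to cgroupfs, and reset conmon_cgroup to "pod" (delete, then re-add after cgroup_manager). A Go rendering of the same substitutions, illustrative only:

// Sketch: the same edits as the sed pipeline, via regexp on a sample drop-in.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `# sample drop-in (illustrative input)
pause_image = "registry.k8s.io/pause:3.6"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"`

	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// The log deletes conmon_cgroup and re-adds it after cgroup_manager;
	// rewriting the value directly has the same net effect.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
		ReplaceAllString(conf, `conmon_cgroup = "pod"`)

	fmt.Println(conf)
}
]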
	I1127 23:44:54.756312   97564 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1127 23:44:54.764924   97564 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1127 23:44:54.772345   97564 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1127 23:44:54.772408   97564 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1127 23:44:54.780009   97564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 23:44:54.849741   97564 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1127 23:44:54.958951   97564 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1127 23:44:54.959025   97564 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1127 23:44:54.962310   97564 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1127 23:44:54.962346   97564 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1127 23:44:54.962356   97564 command_runner.go:130] > Device: bah/186d	Inode: 190         Links: 1
	I1127 23:44:54.962363   97564 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1127 23:44:54.962374   97564 command_runner.go:130] > Access: 2023-11-27 23:44:54.944904785 +0000
	I1127 23:44:54.962386   97564 command_runner.go:130] > Modify: 2023-11-27 23:44:54.944904785 +0000
	I1127 23:44:54.962396   97564 command_runner.go:130] > Change: 2023-11-27 23:44:54.944904785 +0000
	I1127 23:44:54.962404   97564 command_runner.go:130] >  Birth: -
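
[Editor's note: the "Will wait 60s for socket path" step polls until /var/run/crio/crio.sock exists (the stat output above confirms it is a socket). A sketch of such a poll loop; waitForSocket is a hypothetical helper, not minikube's code:

// Sketch: poll until path exists as a unix socket, or time out.
package sockwait

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for socket %s", path)
}
]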
	I1127 23:44:54.962434   97564 start.go:540] Will wait 60s for crictl version
	I1127 23:44:54.962475   97564 ssh_runner.go:195] Run: which crictl
	I1127 23:44:54.965523   97564 command_runner.go:130] > /usr/bin/crictl
	I1127 23:44:54.965594   97564 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1127 23:44:54.995164   97564 command_runner.go:130] > Version:  0.1.0
	I1127 23:44:54.995200   97564 command_runner.go:130] > RuntimeName:  cri-o
	I1127 23:44:54.995205   97564 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1127 23:44:54.995210   97564 command_runner.go:130] > RuntimeApiVersion:  v1
	I1127 23:44:54.997018   97564 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1127 23:44:54.997096   97564 ssh_runner.go:195] Run: crio --version
	I1127 23:44:55.029021   97564 command_runner.go:130] > crio version 1.24.6
	I1127 23:44:55.029041   97564 command_runner.go:130] > Version:          1.24.6
	I1127 23:44:55.029048   97564 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1127 23:44:55.029053   97564 command_runner.go:130] > GitTreeState:     clean
	I1127 23:44:55.029067   97564 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1127 23:44:55.029074   97564 command_runner.go:130] > GoVersion:        go1.18.2
	I1127 23:44:55.029080   97564 command_runner.go:130] > Compiler:         gc
	I1127 23:44:55.029090   97564 command_runner.go:130] > Platform:         linux/amd64
	I1127 23:44:55.029102   97564 command_runner.go:130] > Linkmode:         dynamic
	I1127 23:44:55.029114   97564 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1127 23:44:55.029119   97564 command_runner.go:130] > SeccompEnabled:   true
	I1127 23:44:55.029125   97564 command_runner.go:130] > AppArmorEnabled:  false
	I1127 23:44:55.029195   97564 ssh_runner.go:195] Run: crio --version
	I1127 23:44:55.059618   97564 command_runner.go:130] > crio version 1.24.6
	I1127 23:44:55.059644   97564 command_runner.go:130] > Version:          1.24.6
	I1127 23:44:55.059655   97564 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1127 23:44:55.059662   97564 command_runner.go:130] > GitTreeState:     clean
	I1127 23:44:55.059672   97564 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1127 23:44:55.059683   97564 command_runner.go:130] > GoVersion:        go1.18.2
	I1127 23:44:55.059693   97564 command_runner.go:130] > Compiler:         gc
	I1127 23:44:55.059701   97564 command_runner.go:130] > Platform:         linux/amd64
	I1127 23:44:55.059713   97564 command_runner.go:130] > Linkmode:         dynamic
	I1127 23:44:55.059730   97564 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1127 23:44:55.059741   97564 command_runner.go:130] > SeccompEnabled:   true
	I1127 23:44:55.059748   97564 command_runner.go:130] > AppArmorEnabled:  false
	I1127 23:44:55.062965   97564 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1127 23:44:55.064580   97564 out.go:177]   - env NO_PROXY=192.168.58.2
	I1127 23:44:55.066098   97564 cli_runner.go:164] Run: docker network inspect multinode-595051 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 23:44:55.082948   97564 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1127 23:44:55.086663   97564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 23:44:55.096526   97564 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051 for IP: 192.168.58.3
	I1127 23:44:55.096561   97564 certs.go:190] acquiring lock for shared ca certs: {Name:mkd1a5db8f506dfbef3cb84c722632fd59c37603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:44:55.096696   97564 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4554/.minikube/ca.key
	I1127 23:44:55.096739   97564 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4554/.minikube/proxy-client-ca.key
	I1127 23:44:55.096753   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1127 23:44:55.096772   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1127 23:44:55.096786   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1127 23:44:55.096801   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1127 23:44:55.096871   97564 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/home/jenkins/minikube-integration/17206-4554/.minikube/certs/11306.pem (1338 bytes)
	W1127 23:44:55.096901   97564 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4554/.minikube/certs/home/jenkins/minikube-integration/17206-4554/.minikube/certs/11306_empty.pem, impossibly tiny 0 bytes
	I1127 23:44:55.096912   97564 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca-key.pem (1675 bytes)
	I1127 23:44:55.096939   97564 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem (1078 bytes)
	I1127 23:44:55.097001   97564 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/home/jenkins/minikube-integration/17206-4554/.minikube/certs/cert.pem (1123 bytes)
	I1127 23:44:55.097032   97564 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/home/jenkins/minikube-integration/17206-4554/.minikube/certs/key.pem (1679 bytes)
	I1127 23:44:55.097082   97564 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4554/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4554/.minikube/files/etc/ssl/certs/113062.pem (1708 bytes)
	I1127 23:44:55.097120   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:44:55.097139   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/11306.pem -> /usr/share/ca-certificates/11306.pem
	I1127 23:44:55.097153   97564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4554/.minikube/files/etc/ssl/certs/113062.pem -> /usr/share/ca-certificates/113062.pem
	I1127 23:44:55.097521   97564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1127 23:44:55.117912   97564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1127 23:44:55.138458   97564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1127 23:44:55.159151   97564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1127 23:44:55.180640   97564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1127 23:44:55.201554   97564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/certs/11306.pem --> /usr/share/ca-certificates/11306.pem (1338 bytes)
	I1127 23:44:55.222648   97564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/files/etc/ssl/certs/113062.pem --> /usr/share/ca-certificates/113062.pem (1708 bytes)
	I1127 23:44:55.243446   97564 ssh_runner.go:195] Run: openssl version
	I1127 23:44:55.248353   97564 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1127 23:44:55.248435   97564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1127 23:44:55.256721   97564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:44:55.259993   97564 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 27 23:25 /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:44:55.260037   97564 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:25 /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:44:55.260067   97564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:44:55.266003   97564 command_runner.go:130] > b5213941
	I1127 23:44:55.266134   97564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1127 23:44:55.274423   97564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11306.pem && ln -fs /usr/share/ca-certificates/11306.pem /etc/ssl/certs/11306.pem"
	I1127 23:44:55.282842   97564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11306.pem
	I1127 23:44:55.286022   97564 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 27 23:31 /usr/share/ca-certificates/11306.pem
	I1127 23:44:55.286064   97564 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:31 /usr/share/ca-certificates/11306.pem
	I1127 23:44:55.286101   97564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11306.pem
	I1127 23:44:55.292173   97564 command_runner.go:130] > 51391683
	I1127 23:44:55.292296   97564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11306.pem /etc/ssl/certs/51391683.0"
	I1127 23:44:55.300434   97564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/113062.pem && ln -fs /usr/share/ca-certificates/113062.pem /etc/ssl/certs/113062.pem"
	I1127 23:44:55.308469   97564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113062.pem
	I1127 23:44:55.311539   97564 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 27 23:31 /usr/share/ca-certificates/113062.pem
	I1127 23:44:55.311583   97564 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:31 /usr/share/ca-certificates/113062.pem
	I1127 23:44:55.311620   97564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113062.pem
	I1127 23:44:55.317680   97564 command_runner.go:130] > 3ec20f2e
	I1127 23:44:55.317741   97564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/113062.pem /etc/ssl/certs/3ec20f2e.0"
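
[Editor's note: each of the three blocks above follows the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so the system trust store finds it. A compact sketch of the hash-and-link step; trustCert is a hypothetical helper and requires openssl on PATH plus root:

// Sketch: compute a cert's OpenSSL subject hash and install the
// /etc/ssl/certs/<hash>.0 symlink, mirroring the commands above.
package certs

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func trustCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e" for 113062.pem
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(pemPath, link)
}
]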
	I1127 23:44:55.325931   97564 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1127 23:44:55.329228   97564 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 23:44:55.329293   97564 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 23:44:55.329401   97564 ssh_runner.go:195] Run: crio config
	I1127 23:44:55.364088   97564 command_runner.go:130] ! time="2023-11-27 23:44:55.363652575Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1127 23:44:55.364124   97564 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1127 23:44:55.369570   97564 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1127 23:44:55.369592   97564 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1127 23:44:55.369603   97564 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1127 23:44:55.369606   97564 command_runner.go:130] > #
	I1127 23:44:55.369613   97564 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1127 23:44:55.369619   97564 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1127 23:44:55.369625   97564 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1127 23:44:55.369637   97564 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1127 23:44:55.369643   97564 command_runner.go:130] > # reload'.
	I1127 23:44:55.369649   97564 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1127 23:44:55.369658   97564 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1127 23:44:55.369666   97564 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1127 23:44:55.369674   97564 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1127 23:44:55.369680   97564 command_runner.go:130] > [crio]
	I1127 23:44:55.369686   97564 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1127 23:44:55.369694   97564 command_runner.go:130] > # containers images, in this directory.
	I1127 23:44:55.369703   97564 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1127 23:44:55.369712   97564 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1127 23:44:55.369719   97564 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1127 23:44:55.369726   97564 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1127 23:44:55.369734   97564 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1127 23:44:55.369741   97564 command_runner.go:130] > # storage_driver = "vfs"
	I1127 23:44:55.369747   97564 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1127 23:44:55.369755   97564 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1127 23:44:55.369761   97564 command_runner.go:130] > # storage_option = [
	I1127 23:44:55.369765   97564 command_runner.go:130] > # ]
	I1127 23:44:55.369773   97564 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1127 23:44:55.369782   97564 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1127 23:44:55.369786   97564 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1127 23:44:55.369794   97564 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1127 23:44:55.369800   97564 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1127 23:44:55.369812   97564 command_runner.go:130] > # always happen on a node reboot
	I1127 23:44:55.369819   97564 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1127 23:44:55.369825   97564 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1127 23:44:55.369833   97564 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1127 23:44:55.369845   97564 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1127 23:44:55.369852   97564 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1127 23:44:55.369860   97564 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1127 23:44:55.369874   97564 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1127 23:44:55.369881   97564 command_runner.go:130] > # internal_wipe = true
	I1127 23:44:55.369887   97564 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1127 23:44:55.369895   97564 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1127 23:44:55.369904   97564 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1127 23:44:55.369911   97564 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1127 23:44:55.369918   97564 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1127 23:44:55.369924   97564 command_runner.go:130] > [crio.api]
	I1127 23:44:55.369929   97564 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1127 23:44:55.369936   97564 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1127 23:44:55.369942   97564 command_runner.go:130] > # IP address on which the stream server will listen.
	I1127 23:44:55.369948   97564 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1127 23:44:55.369955   97564 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1127 23:44:55.369962   97564 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1127 23:44:55.369967   97564 command_runner.go:130] > # stream_port = "0"
	I1127 23:44:55.369974   97564 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1127 23:44:55.369978   97564 command_runner.go:130] > # stream_enable_tls = false
	I1127 23:44:55.369986   97564 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1127 23:44:55.369993   97564 command_runner.go:130] > # stream_idle_timeout = ""
	I1127 23:44:55.369999   97564 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1127 23:44:55.370007   97564 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1127 23:44:55.370015   97564 command_runner.go:130] > # minutes.
	I1127 23:44:55.370022   97564 command_runner.go:130] > # stream_tls_cert = ""
	I1127 23:44:55.370028   97564 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1127 23:44:55.370036   97564 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1127 23:44:55.370042   97564 command_runner.go:130] > # stream_tls_key = ""
	I1127 23:44:55.370048   97564 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1127 23:44:55.370077   97564 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1127 23:44:55.370089   97564 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1127 23:44:55.370097   97564 command_runner.go:130] > # stream_tls_ca = ""
	I1127 23:44:55.370106   97564 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1127 23:44:55.370113   97564 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1127 23:44:55.370120   97564 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1127 23:44:55.370127   97564 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1127 23:44:55.370148   97564 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1127 23:44:55.370157   97564 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1127 23:44:55.370161   97564 command_runner.go:130] > [crio.runtime]
	I1127 23:44:55.370166   97564 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1127 23:44:55.370172   97564 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1127 23:44:55.370179   97564 command_runner.go:130] > # "nofile=1024:2048"
	I1127 23:44:55.370185   97564 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1127 23:44:55.370191   97564 command_runner.go:130] > # default_ulimits = [
	I1127 23:44:55.370195   97564 command_runner.go:130] > # ]
	I1127 23:44:55.370203   97564 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1127 23:44:55.370208   97564 command_runner.go:130] > # no_pivot = false
	I1127 23:44:55.370213   97564 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1127 23:44:55.370222   97564 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1127 23:44:55.370229   97564 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1127 23:44:55.370235   97564 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1127 23:44:55.370242   97564 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1127 23:44:55.370249   97564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1127 23:44:55.370253   97564 command_runner.go:130] > # conmon = ""
	I1127 23:44:55.370258   97564 command_runner.go:130] > # Cgroup setting for conmon
	I1127 23:44:55.370266   97564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1127 23:44:55.370273   97564 command_runner.go:130] > conmon_cgroup = "pod"
	I1127 23:44:55.370279   97564 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1127 23:44:55.370290   97564 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1127 23:44:55.370299   97564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1127 23:44:55.370305   97564 command_runner.go:130] > # conmon_env = [
	I1127 23:44:55.370309   97564 command_runner.go:130] > # ]
	I1127 23:44:55.370316   97564 command_runner.go:130] > # Additional environment variables to set for all the
	I1127 23:44:55.370324   97564 command_runner.go:130] > # containers. These are overridden if set in the
	I1127 23:44:55.370331   97564 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1127 23:44:55.370335   97564 command_runner.go:130] > # default_env = [
	I1127 23:44:55.370341   97564 command_runner.go:130] > # ]
	I1127 23:44:55.370347   97564 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1127 23:44:55.370354   97564 command_runner.go:130] > # selinux = false
	I1127 23:44:55.370360   97564 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1127 23:44:55.370371   97564 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1127 23:44:55.370384   97564 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1127 23:44:55.370443   97564 command_runner.go:130] > # seccomp_profile = ""
	I1127 23:44:55.370473   97564 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1127 23:44:55.370482   97564 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1127 23:44:55.370490   97564 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1127 23:44:55.370497   97564 command_runner.go:130] > # which might increase security.
	I1127 23:44:55.370502   97564 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1127 23:44:55.370511   97564 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1127 23:44:55.370520   97564 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1127 23:44:55.370528   97564 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1127 23:44:55.370536   97564 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1127 23:44:55.370544   97564 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:44:55.370550   97564 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1127 23:44:55.370556   97564 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1127 23:44:55.370563   97564 command_runner.go:130] > # the cgroup blockio controller.
	I1127 23:44:55.370567   97564 command_runner.go:130] > # blockio_config_file = ""
	I1127 23:44:55.370576   97564 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1127 23:44:55.370582   97564 command_runner.go:130] > # irqbalance daemon.
	I1127 23:44:55.370587   97564 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1127 23:44:55.370596   97564 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1127 23:44:55.370603   97564 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:44:55.370607   97564 command_runner.go:130] > # rdt_config_file = ""
	I1127 23:44:55.370616   97564 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1127 23:44:55.370621   97564 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1127 23:44:55.370629   97564 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1127 23:44:55.370634   97564 command_runner.go:130] > # separate_pull_cgroup = ""
	I1127 23:44:55.370640   97564 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1127 23:44:55.370649   97564 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1127 23:44:55.370655   97564 command_runner.go:130] > # will be added.
	I1127 23:44:55.370660   97564 command_runner.go:130] > # default_capabilities = [
	I1127 23:44:55.370665   97564 command_runner.go:130] > # 	"CHOWN",
	I1127 23:44:55.370669   97564 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1127 23:44:55.370678   97564 command_runner.go:130] > # 	"FSETID",
	I1127 23:44:55.370685   97564 command_runner.go:130] > # 	"FOWNER",
	I1127 23:44:55.370689   97564 command_runner.go:130] > # 	"SETGID",
	I1127 23:44:55.370695   97564 command_runner.go:130] > # 	"SETUID",
	I1127 23:44:55.370699   97564 command_runner.go:130] > # 	"SETPCAP",
	I1127 23:44:55.370706   97564 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1127 23:44:55.370710   97564 command_runner.go:130] > # 	"KILL",
	I1127 23:44:55.370715   97564 command_runner.go:130] > # ]
	I1127 23:44:55.370723   97564 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1127 23:44:55.370732   97564 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1127 23:44:55.370739   97564 command_runner.go:130] > # add_inheritable_capabilities = true
	I1127 23:44:55.370745   97564 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1127 23:44:55.370753   97564 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1127 23:44:55.370759   97564 command_runner.go:130] > # default_sysctls = [
	I1127 23:44:55.370763   97564 command_runner.go:130] > # ]
	I1127 23:44:55.370770   97564 command_runner.go:130] > # List of devices on the host that a
	I1127 23:44:55.370776   97564 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1127 23:44:55.370782   97564 command_runner.go:130] > # allowed_devices = [
	I1127 23:44:55.370792   97564 command_runner.go:130] > # 	"/dev/fuse",
	I1127 23:44:55.370798   97564 command_runner.go:130] > # ]
	I1127 23:44:55.370803   97564 command_runner.go:130] > # List of additional devices, specified as
	I1127 23:44:55.370837   97564 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1127 23:44:55.370845   97564 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1127 23:44:55.370853   97564 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1127 23:44:55.370858   97564 command_runner.go:130] > # additional_devices = [
	I1127 23:44:55.370862   97564 command_runner.go:130] > # ]
	I1127 23:44:55.370869   97564 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1127 23:44:55.370875   97564 command_runner.go:130] > # cdi_spec_dirs = [
	I1127 23:44:55.370880   97564 command_runner.go:130] > # 	"/etc/cdi",
	I1127 23:44:55.370886   97564 command_runner.go:130] > # 	"/var/run/cdi",
	I1127 23:44:55.370889   97564 command_runner.go:130] > # ]
	I1127 23:44:55.370895   97564 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1127 23:44:55.370903   97564 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1127 23:44:55.370909   97564 command_runner.go:130] > # Defaults to false.
	I1127 23:44:55.370914   97564 command_runner.go:130] > # device_ownership_from_security_context = false
	I1127 23:44:55.370926   97564 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1127 23:44:55.370935   97564 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1127 23:44:55.370942   97564 command_runner.go:130] > # hooks_dir = [
	I1127 23:44:55.370947   97564 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1127 23:44:55.370959   97564 command_runner.go:130] > # ]
	I1127 23:44:55.370967   97564 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1127 23:44:55.370975   97564 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1127 23:44:55.370983   97564 command_runner.go:130] > # its default mounts from the following two files:
	I1127 23:44:55.370989   97564 command_runner.go:130] > #
	I1127 23:44:55.370995   97564 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1127 23:44:55.371004   97564 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1127 23:44:55.371012   97564 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1127 23:44:55.371017   97564 command_runner.go:130] > #
	I1127 23:44:55.371026   97564 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1127 23:44:55.371034   97564 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1127 23:44:55.371040   97564 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1127 23:44:55.371048   97564 command_runner.go:130] > #      only add mounts it finds in this file.
	I1127 23:44:55.371054   97564 command_runner.go:130] > #
	I1127 23:44:55.371059   97564 command_runner.go:130] > # default_mounts_file = ""
	I1127 23:44:55.371069   97564 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1127 23:44:55.371077   97564 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1127 23:44:55.371083   97564 command_runner.go:130] > # pids_limit = 0
	I1127 23:44:55.371089   97564 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1127 23:44:55.371098   97564 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1127 23:44:55.371106   97564 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1127 23:44:55.371116   97564 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1127 23:44:55.371121   97564 command_runner.go:130] > # log_size_max = -1
	I1127 23:44:55.371129   97564 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1127 23:44:55.371135   97564 command_runner.go:130] > # log_to_journald = false
	I1127 23:44:55.371141   97564 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1127 23:44:55.371149   97564 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1127 23:44:55.371157   97564 command_runner.go:130] > # Path to directory for container attach sockets.
	I1127 23:44:55.371162   97564 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1127 23:44:55.371169   97564 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1127 23:44:55.371173   97564 command_runner.go:130] > # bind_mount_prefix = ""
	I1127 23:44:55.371181   97564 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1127 23:44:55.371185   97564 command_runner.go:130] > # read_only = false
	I1127 23:44:55.371204   97564 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1127 23:44:55.371213   97564 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1127 23:44:55.371219   97564 command_runner.go:130] > # live configuration reload.
	I1127 23:44:55.371223   97564 command_runner.go:130] > # log_level = "info"
	I1127 23:44:55.371231   97564 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1127 23:44:55.371236   97564 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:44:55.371242   97564 command_runner.go:130] > # log_filter = ""
	I1127 23:44:55.371248   97564 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1127 23:44:55.371256   97564 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1127 23:44:55.371262   97564 command_runner.go:130] > # separated by comma.
	I1127 23:44:55.371266   97564 command_runner.go:130] > # uid_mappings = ""
	I1127 23:44:55.371275   97564 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1127 23:44:55.371283   97564 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1127 23:44:55.371289   97564 command_runner.go:130] > # separated by comma.
	I1127 23:44:55.371294   97564 command_runner.go:130] > # gid_mappings = ""
	I1127 23:44:55.371302   97564 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1127 23:44:55.371310   97564 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1127 23:44:55.371318   97564 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1127 23:44:55.371325   97564 command_runner.go:130] > # minimum_mappable_uid = -1
	I1127 23:44:55.371334   97564 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1127 23:44:55.371342   97564 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1127 23:44:55.371350   97564 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1127 23:44:55.371357   97564 command_runner.go:130] > # minimum_mappable_gid = -1
	I1127 23:44:55.371363   97564 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1127 23:44:55.371371   97564 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1127 23:44:55.371378   97564 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1127 23:44:55.371382   97564 command_runner.go:130] > # ctr_stop_timeout = 30
	I1127 23:44:55.371388   97564 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1127 23:44:55.371398   97564 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1127 23:44:55.371405   97564 command_runner.go:130] > # a kernel-separating runtime (like kata).
	I1127 23:44:55.371410   97564 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1127 23:44:55.371416   97564 command_runner.go:130] > # drop_infra_ctr = true
	I1127 23:44:55.371442   97564 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1127 23:44:55.371450   97564 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1127 23:44:55.371458   97564 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1127 23:44:55.371464   97564 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1127 23:44:55.371471   97564 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1127 23:44:55.371478   97564 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1127 23:44:55.371486   97564 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1127 23:44:55.371493   97564 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1127 23:44:55.371499   97564 command_runner.go:130] > # pinns_path = ""
	I1127 23:44:55.371506   97564 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1127 23:44:55.371515   97564 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1127 23:44:55.371523   97564 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1127 23:44:55.371529   97564 command_runner.go:130] > # default_runtime = "runc"
	I1127 23:44:55.371534   97564 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1127 23:44:55.371545   97564 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1127 23:44:55.371556   97564 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1127 23:44:55.371565   97564 command_runner.go:130] > # creation as a file is not desired either.
	I1127 23:44:55.371573   97564 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1127 23:44:55.371580   97564 command_runner.go:130] > # the hostname is being managed dynamically.
	I1127 23:44:55.371585   97564 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1127 23:44:55.371591   97564 command_runner.go:130] > # ]
	I1127 23:44:55.371597   97564 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1127 23:44:55.371605   97564 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1127 23:44:55.371614   97564 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1127 23:44:55.371622   97564 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1127 23:44:55.371628   97564 command_runner.go:130] > #
	I1127 23:44:55.371634   97564 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1127 23:44:55.371641   97564 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1127 23:44:55.371645   97564 command_runner.go:130] > #  runtime_type = "oci"
	I1127 23:44:55.371652   97564 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1127 23:44:55.371657   97564 command_runner.go:130] > #  privileged_without_host_devices = false
	I1127 23:44:55.371663   97564 command_runner.go:130] > #  allowed_annotations = []
	I1127 23:44:55.371667   97564 command_runner.go:130] > # Where:
	I1127 23:44:55.371675   97564 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1127 23:44:55.371681   97564 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1127 23:44:55.371690   97564 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1127 23:44:55.371698   97564 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1127 23:44:55.371703   97564 command_runner.go:130] > #   in $PATH.
	I1127 23:44:55.371709   97564 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1127 23:44:55.371716   97564 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1127 23:44:55.371727   97564 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1127 23:44:55.371734   97564 command_runner.go:130] > #   state.
	I1127 23:44:55.371740   97564 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1127 23:44:55.371748   97564 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1127 23:44:55.371757   97564 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1127 23:44:55.371764   97564 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1127 23:44:55.371770   97564 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1127 23:44:55.371779   97564 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1127 23:44:55.371783   97564 command_runner.go:130] > #   The currently recognized values are:
	I1127 23:44:55.371792   97564 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1127 23:44:55.371801   97564 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1127 23:44:55.371809   97564 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1127 23:44:55.371818   97564 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1127 23:44:55.371827   97564 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1127 23:44:55.371835   97564 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1127 23:44:55.371844   97564 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1127 23:44:55.371853   97564 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1127 23:44:55.371860   97564 command_runner.go:130] > #   should be moved to the container's cgroup
	I1127 23:44:55.371867   97564 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1127 23:44:55.371874   97564 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1127 23:44:55.371878   97564 command_runner.go:130] > runtime_type = "oci"
	I1127 23:44:55.371885   97564 command_runner.go:130] > runtime_root = "/run/runc"
	I1127 23:44:55.371889   97564 command_runner.go:130] > runtime_config_path = ""
	I1127 23:44:55.371900   97564 command_runner.go:130] > monitor_path = ""
	I1127 23:44:55.371906   97564 command_runner.go:130] > monitor_cgroup = ""
	I1127 23:44:55.371910   97564 command_runner.go:130] > monitor_exec_cgroup = ""
	I1127 23:44:55.371950   97564 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1127 23:44:55.371957   97564 command_runner.go:130] > # running containers
	I1127 23:44:55.371961   97564 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1127 23:44:55.371970   97564 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1127 23:44:55.371978   97564 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1127 23:44:55.371986   97564 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1127 23:44:55.371991   97564 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1127 23:44:55.371998   97564 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1127 23:44:55.372003   97564 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1127 23:44:55.372011   97564 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1127 23:44:55.372019   97564 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1127 23:44:55.372025   97564 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1127 23:44:55.372033   97564 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1127 23:44:55.372041   97564 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1127 23:44:55.372047   97564 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1127 23:44:55.372057   97564 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1127 23:44:55.372067   97564 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1127 23:44:55.372076   97564 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1127 23:44:55.372087   97564 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1127 23:44:55.372097   97564 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1127 23:44:55.372104   97564 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1127 23:44:55.372114   97564 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1127 23:44:55.372120   97564 command_runner.go:130] > # Example:
	I1127 23:44:55.372125   97564 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1127 23:44:55.372132   97564 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1127 23:44:55.372137   97564 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1127 23:44:55.372144   97564 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1127 23:44:55.372151   97564 command_runner.go:130] > # cpuset = 0
	I1127 23:44:55.372157   97564 command_runner.go:130] > # cpushares = "0-1"
	I1127 23:44:55.372164   97564 command_runner.go:130] > # Where:
	I1127 23:44:55.372174   97564 command_runner.go:130] > # The workload name is workload-type.
	I1127 23:44:55.372183   97564 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1127 23:44:55.372190   97564 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1127 23:44:55.372199   97564 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1127 23:44:55.372208   97564 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1127 23:44:55.372217   97564 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1127 23:44:55.372222   97564 command_runner.go:130] > # 
	I1127 23:44:55.372229   97564 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1127 23:44:55.372234   97564 command_runner.go:130] > #
	I1127 23:44:55.372239   97564 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1127 23:44:55.372250   97564 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1127 23:44:55.372258   97564 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1127 23:44:55.372267   97564 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1127 23:44:55.372275   97564 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1127 23:44:55.372281   97564 command_runner.go:130] > [crio.image]
	I1127 23:44:55.372287   97564 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1127 23:44:55.372294   97564 command_runner.go:130] > # default_transport = "docker://"
	I1127 23:44:55.372300   97564 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1127 23:44:55.372309   97564 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1127 23:44:55.372315   97564 command_runner.go:130] > # global_auth_file = ""
	I1127 23:44:55.372320   97564 command_runner.go:130] > # The image used to instantiate infra containers.
	I1127 23:44:55.372327   97564 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:44:55.372332   97564 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1127 23:44:55.372340   97564 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1127 23:44:55.372348   97564 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1127 23:44:55.372353   97564 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:44:55.372360   97564 command_runner.go:130] > # pause_image_auth_file = ""
	I1127 23:44:55.372365   97564 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1127 23:44:55.372374   97564 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1127 23:44:55.372383   97564 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1127 23:44:55.372388   97564 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1127 23:44:55.372395   97564 command_runner.go:130] > # pause_command = "/pause"
	I1127 23:44:55.372401   97564 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1127 23:44:55.372409   97564 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1127 23:44:55.372424   97564 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1127 23:44:55.372432   97564 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1127 23:44:55.372439   97564 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1127 23:44:55.372444   97564 command_runner.go:130] > # signature_policy = ""
	I1127 23:44:55.372456   97564 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1127 23:44:55.372464   97564 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1127 23:44:55.372470   97564 command_runner.go:130] > # changing them here.
	I1127 23:44:55.372475   97564 command_runner.go:130] > # insecure_registries = [
	I1127 23:44:55.372480   97564 command_runner.go:130] > # ]
	I1127 23:44:55.372487   97564 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1127 23:44:55.372494   97564 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1127 23:44:55.372501   97564 command_runner.go:130] > # image_volumes = "mkdir"
	I1127 23:44:55.372506   97564 command_runner.go:130] > # Temporary directory to use for storing big files
	I1127 23:44:55.372513   97564 command_runner.go:130] > # big_files_temporary_dir = ""
	I1127 23:44:55.372519   97564 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1127 23:44:55.372525   97564 command_runner.go:130] > # CNI plugins.
	I1127 23:44:55.372530   97564 command_runner.go:130] > [crio.network]
	I1127 23:44:55.372538   97564 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1127 23:44:55.372547   97564 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1127 23:44:55.372554   97564 command_runner.go:130] > # cni_default_network = ""
	I1127 23:44:55.372560   97564 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1127 23:44:55.372566   97564 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1127 23:44:55.372572   97564 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1127 23:44:55.372578   97564 command_runner.go:130] > # plugin_dirs = [
	I1127 23:44:55.372582   97564 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1127 23:44:55.372591   97564 command_runner.go:130] > # ]
	I1127 23:44:55.372597   97564 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1127 23:44:55.372603   97564 command_runner.go:130] > [crio.metrics]
	I1127 23:44:55.372608   97564 command_runner.go:130] > # Globally enable or disable metrics support.
	I1127 23:44:55.372614   97564 command_runner.go:130] > # enable_metrics = false
	I1127 23:44:55.372619   97564 command_runner.go:130] > # Specify enabled metrics collectors.
	I1127 23:44:55.372626   97564 command_runner.go:130] > # By default, all metrics are enabled.
	I1127 23:44:55.372632   97564 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1127 23:44:55.372640   97564 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1127 23:44:55.372648   97564 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1127 23:44:55.372654   97564 command_runner.go:130] > # metrics_collectors = [
	I1127 23:44:55.372661   97564 command_runner.go:130] > # 	"operations",
	I1127 23:44:55.372668   97564 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1127 23:44:55.372673   97564 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1127 23:44:55.372679   97564 command_runner.go:130] > # 	"operations_errors",
	I1127 23:44:55.372684   97564 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1127 23:44:55.372690   97564 command_runner.go:130] > # 	"image_pulls_by_name",
	I1127 23:44:55.372694   97564 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1127 23:44:55.372700   97564 command_runner.go:130] > # 	"image_pulls_failures",
	I1127 23:44:55.372705   97564 command_runner.go:130] > # 	"image_pulls_successes",
	I1127 23:44:55.372711   97564 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1127 23:44:55.372716   97564 command_runner.go:130] > # 	"image_layer_reuse",
	I1127 23:44:55.372722   97564 command_runner.go:130] > # 	"containers_oom_total",
	I1127 23:44:55.372726   97564 command_runner.go:130] > # 	"containers_oom",
	I1127 23:44:55.372733   97564 command_runner.go:130] > # 	"processes_defunct",
	I1127 23:44:55.372737   97564 command_runner.go:130] > # 	"operations_total",
	I1127 23:44:55.372743   97564 command_runner.go:130] > # 	"operations_latency_seconds",
	I1127 23:44:55.372747   97564 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1127 23:44:55.372754   97564 command_runner.go:130] > # 	"operations_errors_total",
	I1127 23:44:55.372759   97564 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1127 23:44:55.372766   97564 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1127 23:44:55.372770   97564 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1127 23:44:55.372776   97564 command_runner.go:130] > # 	"image_pulls_success_total",
	I1127 23:44:55.372780   97564 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1127 23:44:55.372787   97564 command_runner.go:130] > # 	"containers_oom_count_total",
	I1127 23:44:55.372794   97564 command_runner.go:130] > # ]
	I1127 23:44:55.372801   97564 command_runner.go:130] > # The port on which the metrics server will listen.
	I1127 23:44:55.372808   97564 command_runner.go:130] > # metrics_port = 9090
	I1127 23:44:55.372813   97564 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1127 23:44:55.372820   97564 command_runner.go:130] > # metrics_socket = ""
	I1127 23:44:55.372825   97564 command_runner.go:130] > # The certificate for the secure metrics server.
	I1127 23:44:55.372833   97564 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1127 23:44:55.372839   97564 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1127 23:44:55.372846   97564 command_runner.go:130] > # certificate on any modification event.
	I1127 23:44:55.372850   97564 command_runner.go:130] > # metrics_cert = ""
	I1127 23:44:55.372857   97564 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1127 23:44:55.372865   97564 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1127 23:44:55.372872   97564 command_runner.go:130] > # metrics_key = ""
	I1127 23:44:55.372879   97564 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1127 23:44:55.372886   97564 command_runner.go:130] > [crio.tracing]
	I1127 23:44:55.372892   97564 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1127 23:44:55.372898   97564 command_runner.go:130] > # enable_tracing = false
	I1127 23:44:55.372903   97564 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1127 23:44:55.372910   97564 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1127 23:44:55.372915   97564 command_runner.go:130] > # Number of samples to collect per million spans.
	I1127 23:44:55.372923   97564 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1127 23:44:55.372931   97564 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1127 23:44:55.372935   97564 command_runner.go:130] > [crio.stats]
	I1127 23:44:55.372943   97564 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1127 23:44:55.372951   97564 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1127 23:44:55.372955   97564 command_runner.go:130] > # stats_collection_period = 0
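
The dump above is CRI-O's effective TOML configuration as echoed back by minikube's command runner. A minimal sketch of reading two of the fields shown (cgroup_manager and pause_image) follows, using the BurntSushi TOML decoder; the struct is an illustrative subset of the schema, not CRI-O's own config type, and /etc/crio/crio.conf is the conventional path assumed here.

    package main

    import (
    	"fmt"
    	"log"

    	"github.com/BurntSushi/toml"
    )

    // Minimal mirror of the fields dumped above; every other key in the
    // file is simply ignored by the decoder. Illustrative subset only.
    type crioConfig struct {
    	Crio struct {
    		Runtime struct {
    			CgroupManager string `toml:"cgroup_manager"`
    			ConmonCgroup  string `toml:"conmon_cgroup"`
    		} `toml:"runtime"`
    		Image struct {
    			PauseImage string `toml:"pause_image"`
    		} `toml:"image"`
    	} `toml:"crio"`
    }

    func main() {
    	var cfg crioConfig
    	// Conventional main config path; drop-in directories would need merging.
    	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("cgroup_manager:", cfg.Crio.Runtime.CgroupManager) // "cgroupfs" in the dump above
    	fmt.Println("pause_image:", cfg.Crio.Image.PauseImage)         // "registry.k8s.io/pause:3.9"
    }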
	I1127 23:44:55.373042   97564 cni.go:84] Creating CNI manager for ""
	I1127 23:44:55.373053   97564 cni.go:136] 2 nodes found, recommending kindnet
	I1127 23:44:55.373062   97564 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1127 23:44:55.373079   97564 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-595051 NodeName:multinode-595051-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1127 23:44:55.373194   97564 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-595051-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
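
The YAML above is the set of kubeadm, kubelet, and kube-proxy documents minikube renders for the joining node; only a handful of fields (advertise address, bind port, node name, node IP) vary per node. A hedged sketch of that rendering with text/template follows; the template text is a trimmed illustration of the InitConfiguration document, not minikube's actual template.

    package main

    import (
    	"log"
    	"os"
    	"text/template"
    )

    // Trimmed, illustrative template: just the per-node fields that vary
    // in the InitConfiguration shown above.
    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    `

    func main() {
    	tmpl := template.Must(template.New("init").Parse(initCfg))
    	// Values taken from the log above.
    	data := map[string]any{
    		"AdvertiseAddress": "192.168.58.3",
    		"APIServerPort":    8443,
    		"NodeName":         "multinode-595051-m02",
    		"NodeIP":           "192.168.58.3",
    	}
    	if err := tmpl.Execute(os.Stdout, data); err != nil {
    		log.Fatal(err)
    	}
    }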
	
	I1127 23:44:55.373246   97564 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-595051-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-595051 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
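
The drop-in unit above first clears the packaged command with an empty ExecStart= and then substitutes its own kubelet invocation. A small sketch reconstructing that command line from a flag map follows; the flag values are copied from the ExecStart line in the log, and the assembly logic is an illustration, not minikube's code.

    package main

    import (
    	"fmt"
    	"sort"
    	"strings"
    )

    func main() {
    	// Flags from the ExecStart line above. Note the log's flag order is
    	// alphabetical, so sorting the keys reproduces it.
    	flags := map[string]string{
    		"bootstrap-kubeconfig":       "/etc/kubernetes/bootstrap-kubelet.conf",
    		"cgroups-per-qos":            "false",
    		"config":                     "/var/lib/kubelet/config.yaml",
    		"container-runtime-endpoint": "unix:///var/run/crio/crio.sock",
    		"enforce-node-allocatable":   "",
    		"hostname-override":          "multinode-595051-m02",
    		"kubeconfig":                 "/etc/kubernetes/kubelet.conf",
    		"node-ip":                    "192.168.58.3",
    	}
    	keys := make([]string, 0, len(flags))
    	for k := range flags {
    		keys = append(keys, k)
    	}
    	sort.Strings(keys)
    	parts := []string{"/var/lib/minikube/binaries/v1.28.4/kubelet"}
    	for _, k := range keys {
    		parts = append(parts, fmt.Sprintf("--%s=%s", k, flags[k]))
    	}
    	fmt.Println("ExecStart=" + strings.Join(parts, " "))
    }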
	I1127 23:44:55.373290   97564 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1127 23:44:55.381591   97564 command_runner.go:130] > kubeadm
	I1127 23:44:55.381616   97564 command_runner.go:130] > kubectl
	I1127 23:44:55.381622   97564 command_runner.go:130] > kubelet
	I1127 23:44:55.381635   97564 binaries.go:44] Found k8s binaries, skipping transfer
	I1127 23:44:55.381678   97564 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1127 23:44:55.389423   97564 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1127 23:44:55.405015   97564 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1127 23:44:55.420681   97564 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1127 23:44:55.423926   97564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
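
The bash pipeline above drops any stale control-plane.minikube.internal entry from /etc/hosts and appends the current mapping. A Go sketch with the same filter-and-append effect follows; the path is parameterized so it can be exercised on a scratch copy rather than the real /etc/hosts.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // updateHosts removes any line ending in "\t<host>" and appends a fresh
    // "ip\thost" mapping, matching the grep -v / echo pipeline above.
    func updateHosts(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // stale entry; drop it
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	// Try on a scratch copy first, not the live /etc/hosts.
    	if err := updateHosts("hosts.copy", "192.168.58.2", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }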
	I1127 23:44:55.433934   97564 host.go:66] Checking if "multinode-595051" exists ...
	I1127 23:44:55.434279   97564 config.go:182] Loaded profile config "multinode-595051": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:44:55.434204   97564 start.go:304] JoinCluster: &{Name:multinode-595051 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-595051 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:44:55.434337   97564 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1127 23:44:55.434402   97564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-595051
	I1127 23:44:55.451500   97564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/multinode-595051/id_rsa Username:docker}
	I1127 23:44:55.593846   97564 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ozf0zu.o697gssv9dagnzry --discovery-token-ca-cert-hash sha256:4d50fd6fa1338d5979f67697fdf2bc9944f7b911d13890c8a839ee1a72bd8682 
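
The --discovery-token-ca-cert-hash printed above is kubeadm's pin of the cluster CA: a SHA-256 digest over the DER-encoded SubjectPublicKeyInfo of the CA certificate. A sketch recomputing it from the CA file follows so the printed value can be checked out of band; the "ca.crt" path is illustrative (in this setup the control plane keeps it under /var/lib/minikube/certs/ca.crt).

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("ca.crt") // illustrative path
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		log.Fatal("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }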
	I1127 23:44:55.593897   97564 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1127 23:44:55.593941   97564 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ozf0zu.o697gssv9dagnzry --discovery-token-ca-cert-hash sha256:4d50fd6fa1338d5979f67697fdf2bc9944f7b911d13890c8a839ee1a72bd8682 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-595051-m02"
	I1127 23:44:55.627440   97564 command_runner.go:130] > [preflight] Running pre-flight checks
	I1127 23:44:55.654953   97564 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1127 23:44:55.654990   97564 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1046-gcp
	I1127 23:44:55.655000   97564 command_runner.go:130] > OS: Linux
	I1127 23:44:55.655008   97564 command_runner.go:130] > CGROUPS_CPU: enabled
	I1127 23:44:55.655014   97564 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1127 23:44:55.655019   97564 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1127 23:44:55.655024   97564 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1127 23:44:55.655029   97564 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1127 23:44:55.655034   97564 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1127 23:44:55.655040   97564 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1127 23:44:55.655055   97564 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1127 23:44:55.655062   97564 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1127 23:44:55.732852   97564 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1127 23:44:55.732878   97564 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1127 23:44:55.757536   97564 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1127 23:44:55.757652   97564 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1127 23:44:55.757674   97564 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1127 23:44:55.837104   97564 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1127 23:44:57.851541   97564 command_runner.go:130] > This node has joined the cluster:
	I1127 23:44:57.851598   97564 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1127 23:44:57.851637   97564 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1127 23:44:57.851657   97564 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1127 23:44:57.853992   97564 command_runner.go:130] ! W1127 23:44:55.626944    1106 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1127 23:44:57.854029   97564 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1046-gcp\n", err: exit status 1
	I1127 23:44:57.854042   97564 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1127 23:44:57.854088   97564 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ozf0zu.o697gssv9dagnzry --discovery-token-ca-cert-hash sha256:4d50fd6fa1338d5979f67697fdf2bc9944f7b911d13890c8a839ee1a72bd8682 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-595051-m02": (2.260120663s)
	I1127 23:44:57.854115   97564 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1127 23:44:58.015696   97564 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1127 23:44:58.015733   97564 start.go:306] JoinCluster complete in 2.581529457s
	I1127 23:44:58.015744   97564 cni.go:84] Creating CNI manager for ""
	I1127 23:44:58.015749   97564 cni.go:136] 2 nodes found, recommending kindnet
	I1127 23:44:58.015789   97564 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1127 23:44:58.019460   97564 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1127 23:44:58.019494   97564 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I1127 23:44:58.019504   97564 command_runner.go:130] > Device: 37h/55d	Inode: 545259      Links: 1
	I1127 23:44:58.019512   97564 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1127 23:44:58.019520   97564 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I1127 23:44:58.019528   97564 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I1127 23:44:58.019537   97564 command_runner.go:130] > Change: 2023-11-27 23:25:11.484300745 +0000
	I1127 23:44:58.019551   97564 command_runner.go:130] >  Birth: 2023-11-27 23:25:11.460298307 +0000
	I1127 23:44:58.019628   97564 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1127 23:44:58.019643   97564 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1127 23:44:58.035878   97564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1127 23:44:58.234731   97564 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1127 23:44:58.242383   97564 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1127 23:44:58.244667   97564 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1127 23:44:58.258139   97564 command_runner.go:130] > daemonset.apps/kindnet configured
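
The sequence above first stats /opt/cni/bin/portmap (kindnet relies on the portmap CNI plugin being present on the node) and then applies the kindnet manifest with the cluster's own kubectl. A minimal check-then-apply sketch follows; the binary and manifest paths are taken from the log, and using os/exec here is an illustration rather than minikube's internal runner.

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Bail out early if the portmap plugin is missing rather than
    	// applying a manifest that cannot work.
    	if _, err := os.Stat("/opt/cni/bin/portmap"); err != nil {
    		log.Fatalf("portmap plugin missing: %v", err)
    	}
    	cmd := exec.Command(
    		"/var/lib/minikube/binaries/v1.28.4/kubectl", "apply",
    		"--kubeconfig=/var/lib/minikube/kubeconfig",
    		"-f", "/var/tmp/minikube/cni.yaml",
    	)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		log.Fatal(err)
    	}
    }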
	I1127 23:44:58.262812   97564 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17206-4554/kubeconfig
	I1127 23:44:58.263209   97564 kapi.go:59] client config for multinode-595051: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:44:58.263566   97564 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1127 23:44:58.263580   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:58.263587   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:58.263595   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:58.265561   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:44:58.265585   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:58.265594   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:58.265620   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:58.265632   97564 round_trippers.go:580]     Content-Length: 291
	I1127 23:44:58.265644   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:58 GMT
	I1127 23:44:58.265659   97564 round_trippers.go:580]     Audit-Id: ab42007f-7209-44a7-91c7-6a49b0fe602b
	I1127 23:44:58.265667   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:58.265678   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:58.265708   97564 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"26a7bde8-57dd-4f08-8c71-2df4ee1c3187","resourceVersion":"401","creationTimestamp":"2023-11-27T23:44:24Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1127 23:44:58.265804   97564 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-595051" context rescaled to 1 replicas
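
The GET above reads the coredns deployment's Scale subresource, and minikube then pins it to a single replica for the test cluster. A client-go sketch of the same get-then-update follows; the kubeconfig path is the one from the log, and error handling is kept minimal for brevity.

    package main

    import (
    	"context"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17206-4554/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	ctx := context.Background()
    	deployments := cs.AppsV1().Deployments("kube-system")
    	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	scale.Spec.Replicas = 1 // one DNS replica is enough for a test cluster
    	if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
    		log.Fatal(err)
    	}
    }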
	I1127 23:44:58.265842   97564 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1127 23:44:58.267816   97564 out.go:177] * Verifying Kubernetes components...
	I1127 23:44:58.269546   97564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:44:58.281247   97564 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17206-4554/kubeconfig
	I1127 23:44:58.281593   97564 kapi.go:59] client config for multinode-595051: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4554/.minikube/profiles/multinode-595051/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:44:58.281847   97564 node_ready.go:35] waiting up to 6m0s for node "multinode-595051-m02" to be "Ready" ...
	I1127 23:44:58.281905   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:44:58.281913   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:58.281920   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:58.281928   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:58.283842   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:44:58.283880   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:58.283890   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:58 GMT
	I1127 23:44:58.283909   97564 round_trippers.go:580]     Audit-Id: 5c62e001-61d5-4789-a123-0fd9de9ac237
	I1127 23:44:58.283917   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:58.283925   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:58.283937   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:58.283949   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:58.284085   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"450","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1127 23:44:58.284436   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:44:58.284452   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:58.284459   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:58.284465   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:58.286248   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:44:58.286266   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:58.286275   97564 round_trippers.go:580]     Audit-Id: 84a35f59-8fda-4a77-a5b0-daed7158df74
	I1127 23:44:58.286284   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:58.286292   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:58.286300   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:58.286308   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:58.286325   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:58 GMT
	I1127 23:44:58.286452   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"450","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1127 23:44:58.787357   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:44:58.787387   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:58.787399   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:58.787409   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:58.789684   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:58.789710   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:58.789719   97564 round_trippers.go:580]     Audit-Id: b5e5c6c8-c373-41f2-8748-ecf31319984c
	I1127 23:44:58.789727   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:58.789735   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:58.789743   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:58.789756   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:58.789768   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:58 GMT
	I1127 23:44:58.789890   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"450","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1127 23:44:59.287322   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:44:59.287348   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:59.287362   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:59.287368   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:59.289680   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:59.289701   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:59.289711   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:59 GMT
	I1127 23:44:59.289720   97564 round_trippers.go:580]     Audit-Id: 741a1ffd-635b-478e-a6f0-178fc9a1f670
	I1127 23:44:59.289727   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:59.289735   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:59.289752   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:59.289760   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:59.289884   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"450","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1127 23:44:59.787320   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:44:59.787348   97564 round_trippers.go:469] Request Headers:
	I1127 23:44:59.787368   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:44:59.787376   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:44:59.789741   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:44:59.789774   97564 round_trippers.go:577] Response Headers:
	I1127 23:44:59.789785   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:44:59.789794   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:44:59.789803   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:44:59 GMT
	I1127 23:44:59.789813   97564 round_trippers.go:580]     Audit-Id: 510223bc-f107-44fb-a48d-8e504016d8d5
	I1127 23:44:59.789825   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:44:59.789836   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:44:59.789949   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"450","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1127 23:45:00.287330   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:00.287355   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:00.287363   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:00.287369   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:00.289790   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:00.289816   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:00.289827   97564 round_trippers.go:580]     Audit-Id: 97fe3f29-737d-4191-876e-41a476697af8
	I1127 23:45:00.289836   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:00.289850   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:00.289868   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:00.289877   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:00.289885   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:00 GMT
	I1127 23:45:00.290105   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"450","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1127 23:45:00.290441   97564 node_ready.go:58] node "multinode-595051-m02" has status "Ready":"False"
	I1127 23:45:00.787349   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:00.787371   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:00.787379   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:00.787386   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:00.789682   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:00.789706   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:00.789715   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:00.789723   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:00.789730   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:00 GMT
	I1127 23:45:00.789738   97564 round_trippers.go:580]     Audit-Id: 03a89c86-1c42-4fe7-bd6c-2fbad475e10d
	I1127 23:45:00.789748   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:00.789761   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:00.789962   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"450","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1127 23:45:01.287342   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:01.287375   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:01.287383   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:01.287389   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:01.289592   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:01.289615   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:01.289623   97564 round_trippers.go:580]     Audit-Id: 0880839b-24d4-4ccc-8e5e-8cb42ddd4f1e
	I1127 23:45:01.289631   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:01.289637   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:01.289645   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:01.289652   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:01.289660   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:01 GMT
	I1127 23:45:01.289772   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"450","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1127 23:45:01.787314   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:01.787342   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:01.787354   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:01.787363   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:01.789722   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:01.789747   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:01.789756   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:01.789763   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:01.789771   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:01 GMT
	I1127 23:45:01.789778   97564 round_trippers.go:580]     Audit-Id: 07d2c5a9-f4de-41d2-9b67-8fb1ee6837de
	I1127 23:45:01.789786   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:01.789798   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:01.789920   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"450","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1127 23:45:02.287383   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:02.287421   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:02.287433   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:02.287444   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:02.289850   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:02.289873   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:02.289900   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:02.289908   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:02.289917   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:02.289936   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:02.289949   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:02 GMT
	I1127 23:45:02.289958   97564 round_trippers.go:580]     Audit-Id: b9a29489-2a07-4664-b6a9-3b479fd68fcd
	I1127 23:45:02.290111   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"467","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1127 23:45:02.291030   97564 node_ready.go:58] node "multinode-595051-m02" has status "Ready":"False"
	I1127 23:45:02.787319   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:02.787361   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:02.787369   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:02.787376   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:02.789544   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:02.789571   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:02.789582   97564 round_trippers.go:580]     Audit-Id: 5d271f40-d54a-4382-9746-ddfbfd6a665d
	I1127 23:45:02.789592   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:02.789600   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:02.789609   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:02.789625   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:02.789635   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:02 GMT
	I1127 23:45:02.789756   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"467","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1127 23:45:03.287886   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:03.287918   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:03.287927   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:03.287933   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:03.290422   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:03.290450   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:03.290459   97564 round_trippers.go:580]     Audit-Id: 8bc5b0b6-a65f-4ef6-8965-082086266cb4
	I1127 23:45:03.290467   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:03.290475   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:03.290483   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:03.290491   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:03.290508   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:03 GMT
	I1127 23:45:03.290643   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"467","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1127 23:45:03.787212   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:03.787243   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:03.787256   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:03.787267   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:03.789653   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:03.789691   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:03.789703   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:03.789712   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:03.789721   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:03.789730   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:03 GMT
	I1127 23:45:03.789737   97564 round_trippers.go:580]     Audit-Id: f44a4d12-292c-4e5e-85a0-cb8669baed28
	I1127 23:45:03.789746   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:03.789872   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"467","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1127 23:45:04.287355   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:04.287383   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:04.287391   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:04.287398   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:04.289841   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:04.289869   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:04.289879   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:04.289884   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:04.289890   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:04 GMT
	I1127 23:45:04.289898   97564 round_trippers.go:580]     Audit-Id: e1380da8-c8e8-4e05-b3f6-7605de17522b
	I1127 23:45:04.289906   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:04.289914   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:04.290123   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"467","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1127 23:45:04.787346   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:04.787433   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:04.787454   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:04.787473   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:04.789601   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:04.789629   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:04.789639   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:04.789648   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:04.789656   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:04 GMT
	I1127 23:45:04.789665   97564 round_trippers.go:580]     Audit-Id: d83cb02f-aa21-4a53-9524-be6b4da3a774
	I1127 23:45:04.789674   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:04.789685   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:04.789791   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"467","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1127 23:45:04.790112   97564 node_ready.go:58] node "multinode-595051-m02" has status "Ready":"False"
	I1127 23:45:05.287326   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:05.287350   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:05.287378   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:05.287385   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:05.289775   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:05.289807   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:05.289816   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:05.289823   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:05.289831   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:05 GMT
	I1127 23:45:05.289842   97564 round_trippers.go:580]     Audit-Id: b0141291-37e0-4f7e-9c01-577e376f15da
	I1127 23:45:05.289850   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:05.289859   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:05.289999   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"467","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1127 23:45:05.787295   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:05.787337   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:05.787347   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:05.787354   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:05.789813   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:05.789841   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:05.789851   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:05.789860   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:05.789869   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:05.789877   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:05 GMT
	I1127 23:45:05.789887   97564 round_trippers.go:580]     Audit-Id: 22acbc00-e91e-444d-a815-f0d52f977ce9
	I1127 23:45:05.789898   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:05.790161   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"467","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1127 23:45:06.287285   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:06.287309   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:06.287317   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:06.287322   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:06.289575   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:06.289596   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:06.289603   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:06.289610   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:06.289618   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:06.289625   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:06.289633   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:06 GMT
	I1127 23:45:06.289644   97564 round_trippers.go:580]     Audit-Id: 151ee440-d547-44e4-86be-b9a7e6f36140
	I1127 23:45:06.289767   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"467","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1127 23:45:06.787345   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:06.787370   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:06.787378   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:06.787384   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:06.789855   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:06.789884   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:06.789894   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:06.789904   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:06 GMT
	I1127 23:45:06.789912   97564 round_trippers.go:580]     Audit-Id: 8c89d55d-53ef-4d6c-be9e-25d6b8e7821e
	I1127 23:45:06.789922   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:06.789931   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:06.789944   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:06.790090   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"467","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1127 23:45:06.790440   97564 node_ready.go:58] node "multinode-595051-m02" has status "Ready":"False"
	I1127 23:45:07.287889   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:07.287918   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:07.287944   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:07.287954   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:07.290416   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:07.290434   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:07.290441   97564 round_trippers.go:580]     Audit-Id: 98678df0-9599-433d-b838-cb6b13d56549
	I1127 23:45:07.290446   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:07.290451   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:07.290456   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:07.290461   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:07.290466   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:07 GMT
	I1127 23:45:07.290624   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"467","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1127 23:45:07.787229   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:07.787270   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:07.787282   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:07.787292   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:07.789540   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:07.789564   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:07.789576   97564 round_trippers.go:580]     Audit-Id: 77b2971f-0133-4f09-837b-52bf33eb87c8
	I1127 23:45:07.789583   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:07.789591   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:07.789601   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:07.789608   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:07.789617   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:07 GMT
	I1127 23:45:07.789725   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"467","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1127 23:45:08.287575   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:08.287596   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:08.287604   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:08.287610   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:08.290085   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:08.290115   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:08.290126   97564 round_trippers.go:580]     Audit-Id: 19564d96-c748-49ce-bcbc-3d0c498ecd60
	I1127 23:45:08.290135   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:08.290142   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:08.290151   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:08.290164   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:08.290176   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:08 GMT
	I1127 23:45:08.290337   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"475","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 23:45:08.787919   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:08.787948   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:08.787957   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:08.787964   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:08.790319   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:08.790339   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:08.790345   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:08.790351   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:08.790357   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:08.790366   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:08 GMT
	I1127 23:45:08.790375   97564 round_trippers.go:580]     Audit-Id: 4ec546f2-807c-453f-85f5-282b32909b66
	I1127 23:45:08.790383   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:08.790493   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"475","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 23:45:08.790804   97564 node_ready.go:58] node "multinode-595051-m02" has status "Ready":"False"
	I1127 23:45:09.287085   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:09.287112   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:09.287121   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:09.287127   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:09.289409   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:09.289447   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:09.289459   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:09.289468   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:09.289477   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:09 GMT
	I1127 23:45:09.289489   97564 round_trippers.go:580]     Audit-Id: dc088aae-eb58-4cc9-8f85-743f456abd54
	I1127 23:45:09.289498   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:09.289509   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:09.289655   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"475","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 23:45:09.787125   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:09.787152   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:09.787167   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:09.787173   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:09.789714   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:09.789740   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:09.789751   97564 round_trippers.go:580]     Audit-Id: 791e4f97-8524-49ee-a555-4359ff01e682
	I1127 23:45:09.789760   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:09.789766   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:09.789772   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:09.789777   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:09.789788   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:09 GMT
	I1127 23:45:09.789933   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"475","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 23:45:10.287283   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:10.287314   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:10.287323   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:10.287329   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:10.289947   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:10.289980   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:10.289990   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:10.289998   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:10.290008   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:10 GMT
	I1127 23:45:10.290017   97564 round_trippers.go:580]     Audit-Id: 5e18c7a4-2110-4451-9afd-6bf593f10852
	I1127 23:45:10.290037   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:10.290066   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:10.290216   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"475","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 23:45:10.787335   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:10.787361   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:10.787373   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:10.787382   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:10.789769   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:10.789796   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:10.789804   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:10.789814   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:10 GMT
	I1127 23:45:10.789823   97564 round_trippers.go:580]     Audit-Id: 50ac5c46-16c4-4883-95fe-484d955de2e5
	I1127 23:45:10.789830   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:10.789845   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:10.789852   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:10.789951   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"475","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 23:45:11.287322   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:11.287347   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:11.287360   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:11.287394   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:11.291968   97564 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1127 23:45:11.291998   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:11.292008   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:11.292016   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:11.292025   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:11.292033   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:11 GMT
	I1127 23:45:11.292044   97564 round_trippers.go:580]     Audit-Id: eb762eed-5dd6-4788-a836-9d1046e79c7e
	I1127 23:45:11.292050   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:11.292217   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"475","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 23:45:11.292576   97564 node_ready.go:58] node "multinode-595051-m02" has status "Ready":"False"
	I1127 23:45:11.787921   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:11.787944   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:11.787954   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:11.787963   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:11.790436   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:11.790460   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:11.790467   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:11 GMT
	I1127 23:45:11.790472   97564 round_trippers.go:580]     Audit-Id: 8076a556-0e5f-4c17-aa00-ebd9849f6008
	I1127 23:45:11.790477   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:11.790483   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:11.790488   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:11.790495   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:11.790567   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"475","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 23:45:12.287273   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:12.287300   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:12.287308   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:12.287314   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:12.289509   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:12.289539   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:12.289549   97564 round_trippers.go:580]     Audit-Id: 3d4f6af8-d169-41a3-9153-563cbf8f1150
	I1127 23:45:12.289559   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:12.289568   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:12.289577   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:12.289585   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:12.289595   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:12 GMT
	I1127 23:45:12.289707   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"475","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 23:45:12.787243   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:12.787265   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:12.787273   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:12.787279   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:12.789716   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:12.789746   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:12.789756   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:12.789764   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:12.789772   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:12.789780   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:12.789791   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:12 GMT
	I1127 23:45:12.789800   97564 round_trippers.go:580]     Audit-Id: 8b883fdc-03c2-4b71-826c-b89808d212d2
	I1127 23:45:12.789891   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"475","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 23:45:13.287834   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:13.287858   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:13.287867   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:13.287873   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:13.290135   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:13.290161   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:13.290170   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:13.290178   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:13.290186   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:13 GMT
	I1127 23:45:13.290194   97564 round_trippers.go:580]     Audit-Id: 2e3261dd-a529-496d-8a9f-193c1ab37a13
	I1127 23:45:13.290202   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:13.290215   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:13.290323   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"475","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 23:45:13.786915   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:13.786939   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:13.786947   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:13.786953   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:13.789154   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:13.789179   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:13.789189   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:13.789196   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:13.789204   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:13.789216   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:13 GMT
	I1127 23:45:13.789223   97564 round_trippers.go:580]     Audit-Id: dcc212e0-ba1f-41a0-acf6-e5738af0f49c
	I1127 23:45:13.789234   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:13.789320   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"475","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 23:45:13.789602   97564 node_ready.go:58] node "multinode-595051-m02" has status "Ready":"False"
	I1127 23:45:14.286943   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:14.286968   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:14.286978   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:14.286985   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:14.289483   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:14.289511   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:14.289520   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:14.289526   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:14.289532   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:14 GMT
	I1127 23:45:14.289537   97564 round_trippers.go:580]     Audit-Id: acc24c51-a504-4ef6-a2ec-8a99e21942cf
	I1127 23:45:14.289542   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:14.289548   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:14.289803   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"475","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 23:45:14.787315   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:14.787337   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:14.787345   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:14.787356   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:14.789776   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:14.789798   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:14.789805   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:14.789810   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:14.789815   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:14.789820   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:14.789827   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:14 GMT
	I1127 23:45:14.789835   97564 round_trippers.go:580]     Audit-Id: fe63cf83-2c8e-49b1-ad76-e28165f17501
	I1127 23:45:14.790004   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"475","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I1127 23:45:15.287324   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:15.287348   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:15.287356   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:15.287363   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:15.289365   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:45:15.289413   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:15.289423   97564 round_trippers.go:580]     Audit-Id: 893305d5-0908-4777-9c24-3630b728d168
	I1127 23:45:15.289431   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:15.289440   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:15.289449   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:15.289462   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:15.289470   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:15 GMT
	I1127 23:45:15.289586   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"493","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5296 chars]
	I1127 23:45:15.289888   97564 node_ready.go:49] node "multinode-595051-m02" has status "Ready":"True"
	I1127 23:45:15.289904   97564 node_ready.go:38] duration metric: took 17.008042638s waiting for node "multinode-595051-m02" to be "Ready" ...
	I1127 23:45:15.289915   97564 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 23:45:15.289972   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1127 23:45:15.289980   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:15.289987   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:15.289993   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:15.292862   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:15.292885   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:15.292892   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:15.292898   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:15.292906   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:15.292915   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:15 GMT
	I1127 23:45:15.292927   97564 round_trippers.go:580]     Audit-Id: 843f67ec-65b5-4d17-bda9-c253364ddb97
	I1127 23:45:15.292938   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:15.293405   97564 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"493"},"items":[{"metadata":{"name":"coredns-5dd5756b68-px5k6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"31070a53-8a76-42ef-ba74-254dc4e13178","resourceVersion":"397","creationTimestamp":"2023-11-27T23:44:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f162c176-389a-4758-b0d8-e22eca3ff811","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f162c176-389a-4758-b0d8-e22eca3ff811\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 69097 chars]
	I1127 23:45:15.295915   97564 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-px5k6" in "kube-system" namespace to be "Ready" ...
	I1127 23:45:15.296002   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-px5k6
	I1127 23:45:15.296015   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:15.296024   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:15.296033   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:15.297686   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:45:15.297718   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:15.297728   97564 round_trippers.go:580]     Audit-Id: b5b15aab-29b1-4a6e-82f4-03821e5ec815
	I1127 23:45:15.297736   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:15.297743   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:15.297754   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:15.297763   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:15.297778   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:15 GMT
	I1127 23:45:15.297898   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-px5k6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"31070a53-8a76-42ef-ba74-254dc4e13178","resourceVersion":"397","creationTimestamp":"2023-11-27T23:44:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f162c176-389a-4758-b0d8-e22eca3ff811","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f162c176-389a-4758-b0d8-e22eca3ff811\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1127 23:45:15.298352   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:45:15.298370   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:15.298377   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:15.298385   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:15.300883   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:15.300905   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:15.300915   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:15.300923   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:15 GMT
	I1127 23:45:15.300933   97564 round_trippers.go:580]     Audit-Id: f1acdb3b-1e73-4d58-96bc-d62dc395e006
	I1127 23:45:15.300945   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:15.300953   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:15.300963   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:15.301085   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:45:15.301419   97564 pod_ready.go:92] pod "coredns-5dd5756b68-px5k6" in "kube-system" namespace has status "Ready":"True"
	I1127 23:45:15.301434   97564 pod_ready.go:81] duration metric: took 5.493325ms waiting for pod "coredns-5dd5756b68-px5k6" in "kube-system" namespace to be "Ready" ...
	I1127 23:45:15.301442   97564 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-595051" in "kube-system" namespace to be "Ready" ...
	I1127 23:45:15.301484   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-595051
	I1127 23:45:15.301491   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:15.301498   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:15.301503   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:15.303089   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:45:15.303105   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:15.303114   97564 round_trippers.go:580]     Audit-Id: 1f303fb8-fbaf-4556-a9c2-d46c7935b2df
	I1127 23:45:15.303123   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:15.303130   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:15.303142   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:15.303148   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:15.303154   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:15 GMT
	I1127 23:45:15.303284   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-595051","namespace":"kube-system","uid":"c9ffa2b1-6f5a-4bda-9e11-9f3b362ebae7","resourceVersion":"416","creationTimestamp":"2023-11-27T23:44:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"4644bba5e6a2283bfa7cf03a530d41d3","kubernetes.io/config.mirror":"4644bba5e6a2283bfa7cf03a530d41d3","kubernetes.io/config.seen":"2023-11-27T23:44:18.731160273Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1127 23:45:15.303639   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:45:15.303652   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:15.303659   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:15.303666   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:15.305333   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:45:15.305376   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:15.305387   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:15.305400   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:15.305409   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:15.305421   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:15 GMT
	I1127 23:45:15.305433   97564 round_trippers.go:580]     Audit-Id: 4fc54404-466d-45fc-830a-09858a27430c
	I1127 23:45:15.305444   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:15.305529   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:45:15.305821   97564 pod_ready.go:92] pod "etcd-multinode-595051" in "kube-system" namespace has status "Ready":"True"
	I1127 23:45:15.305837   97564 pod_ready.go:81] duration metric: took 4.388273ms waiting for pod "etcd-multinode-595051" in "kube-system" namespace to be "Ready" ...
	I1127 23:45:15.305850   97564 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-595051" in "kube-system" namespace to be "Ready" ...
	I1127 23:45:15.305891   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-595051
	I1127 23:45:15.305897   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:15.305904   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:15.305912   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:15.307555   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:45:15.307573   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:15.307582   97564 round_trippers.go:580]     Audit-Id: 00d709ba-ccd0-4cca-955b-ee36d63d76f7
	I1127 23:45:15.307589   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:15.307601   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:15.307610   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:15.307622   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:15.307632   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:15 GMT
	I1127 23:45:15.307750   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-595051","namespace":"kube-system","uid":"111b6195-41a6-4248-9c66-4d3d88d8628d","resourceVersion":"418","creationTimestamp":"2023-11-27T23:44:23Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"6fe9890b037e16c7bf188f651d40131d","kubernetes.io/config.mirror":"6fe9890b037e16c7bf188f651d40131d","kubernetes.io/config.seen":"2023-11-27T23:44:18.731154530Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1127 23:45:15.308144   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:45:15.308158   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:15.308167   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:15.308175   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:15.309706   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:45:15.309726   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:15.309733   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:15.309744   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:15.309751   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:15.309756   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:15 GMT
	I1127 23:45:15.309761   97564 round_trippers.go:580]     Audit-Id: 060ea8f3-74a8-462a-8f2d-884fc23b4050
	I1127 23:45:15.309769   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:15.309851   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:45:15.310204   97564 pod_ready.go:92] pod "kube-apiserver-multinode-595051" in "kube-system" namespace has status "Ready":"True"
	I1127 23:45:15.310225   97564 pod_ready.go:81] duration metric: took 4.367645ms waiting for pod "kube-apiserver-multinode-595051" in "kube-system" namespace to be "Ready" ...
	I1127 23:45:15.310233   97564 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-595051" in "kube-system" namespace to be "Ready" ...
	I1127 23:45:15.310278   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-595051
	I1127 23:45:15.310286   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:15.310293   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:15.310302   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:15.311924   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:45:15.311943   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:15.311953   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:15.311962   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:15.311970   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:15.311979   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:15 GMT
	I1127 23:45:15.311985   97564 round_trippers.go:580]     Audit-Id: e4b4585c-81b1-42b1-a4d9-d48c84dbddd7
	I1127 23:45:15.311998   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:15.312104   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-595051","namespace":"kube-system","uid":"fe43d1dc-0983-4cc9-b07a-9a17a606bc82","resourceVersion":"415","creationTimestamp":"2023-11-27T23:44:25Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"502f83a658bafeb48b025970fae2234e","kubernetes.io/config.mirror":"502f83a658bafeb48b025970fae2234e","kubernetes.io/config.seen":"2023-11-27T23:44:24.605671216Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1127 23:45:15.312489   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:45:15.312504   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:15.312518   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:15.312527   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:15.314010   97564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:45:15.314028   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:15.314038   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:15.314047   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:15.314071   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:15.314134   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:15.314163   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:15 GMT
	I1127 23:45:15.314169   97564 round_trippers.go:580]     Audit-Id: 4e790d5b-2687-4e38-817f-e03e880f8009
	I1127 23:45:15.314262   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:45:15.314557   97564 pod_ready.go:92] pod "kube-controller-manager-multinode-595051" in "kube-system" namespace has status "Ready":"True"
	I1127 23:45:15.314574   97564 pod_ready.go:81] duration metric: took 4.334564ms waiting for pod "kube-controller-manager-multinode-595051" in "kube-system" namespace to be "Ready" ...
	I1127 23:45:15.314583   97564 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gjwvt" in "kube-system" namespace to be "Ready" ...
	I1127 23:45:15.487698   97564 request.go:629] Waited for 173.065087ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gjwvt
	I1127 23:45:15.487755   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gjwvt
	I1127 23:45:15.487762   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:15.487776   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:15.487789   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:15.490220   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:15.490246   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:15.490257   97564 round_trippers.go:580]     Audit-Id: b4492db1-7bb3-43ca-b24d-a673883ee39d
	I1127 23:45:15.490266   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:15.490274   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:15.490283   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:15.490292   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:15.490304   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:15 GMT
	I1127 23:45:15.490414   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gjwvt","generateName":"kube-proxy-","namespace":"kube-system","uid":"9b33c9c4-54cf-49e4-a97a-d782fa80c2d8","resourceVersion":"383","creationTimestamp":"2023-11-27T23:44:37Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1f3c9e31-edf0-467a-8ea0-336e61619a0e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1f3c9e31-edf0-467a-8ea0-336e61619a0e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1127 23:45:15.687986   97564 request.go:629] Waited for 197.12797ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:45:15.688040   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:45:15.688048   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:15.688055   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:15.688080   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:15.690612   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:15.690636   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:15.690652   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:15.690660   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:15.690668   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:15.690676   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:15.690685   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:15 GMT
	I1127 23:45:15.690694   97564 round_trippers.go:580]     Audit-Id: d5553128-5bd9-4b04-9ea2-e6c681366379
	I1127 23:45:15.690806   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:45:15.691162   97564 pod_ready.go:92] pod "kube-proxy-gjwvt" in "kube-system" namespace has status "Ready":"True"
	I1127 23:45:15.691190   97564 pod_ready.go:81] duration metric: took 376.599741ms waiting for pod "kube-proxy-gjwvt" in "kube-system" namespace to be "Ready" ...
	I1127 23:45:15.691203   97564 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hsm4c" in "kube-system" namespace to be "Ready" ...
	I1127 23:45:15.887625   97564 request.go:629] Waited for 196.355975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hsm4c
	I1127 23:45:15.887679   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hsm4c
	I1127 23:45:15.887684   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:15.887693   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:15.887699   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:15.889995   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:15.890024   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:15.890033   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:15.890039   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:15.890046   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:15.890066   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:15 GMT
	I1127 23:45:15.890072   97564 round_trippers.go:580]     Audit-Id: 2be08ec7-a5df-44b4-b2b6-0c8345325f4b
	I1127 23:45:15.890077   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:15.890192   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hsm4c","generateName":"kube-proxy-","namespace":"kube-system","uid":"2820f084-fd15-4d14-9e7a-1d7b80b6e642","resourceVersion":"482","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1f3c9e31-edf0-467a-8ea0-336e61619a0e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1f3c9e31-edf0-467a-8ea0-336e61619a0e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1127 23:45:16.087959   97564 request.go:629] Waited for 197.353325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:16.088091   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051-m02
	I1127 23:45:16.088102   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:16.088109   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:16.088116   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:16.090344   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:16.090367   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:16.090384   97564 round_trippers.go:580]     Audit-Id: 0ac0b2be-6455-49c9-b7f4-021760c1f4a6
	I1127 23:45:16.090390   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:16.090395   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:16.090406   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:16.090411   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:16.090417   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:16 GMT
	I1127 23:45:16.090518   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051-m02","uid":"b0735e19-310f-432f-8fcd-29a9662cb043","resourceVersion":"493","creationTimestamp":"2023-11-27T23:44:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5296 chars]
	I1127 23:45:16.090819   97564 pod_ready.go:92] pod "kube-proxy-hsm4c" in "kube-system" namespace has status "Ready":"True"
	I1127 23:45:16.090845   97564 pod_ready.go:81] duration metric: took 399.625601ms waiting for pod "kube-proxy-hsm4c" in "kube-system" namespace to be "Ready" ...
	I1127 23:45:16.090858   97564 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-595051" in "kube-system" namespace to be "Ready" ...
	I1127 23:45:16.288323   97564 request.go:629] Waited for 197.404036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-595051
	I1127 23:45:16.288411   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-595051
	I1127 23:45:16.288423   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:16.288435   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:16.288464   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:16.290840   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:16.290860   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:16.290866   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:16 GMT
	I1127 23:45:16.290872   97564 round_trippers.go:580]     Audit-Id: 5b616038-fad8-489f-8136-c973bcf89b27
	I1127 23:45:16.290884   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:16.290890   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:16.290895   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:16.290904   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:16.291003   97564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-595051","namespace":"kube-system","uid":"661b7e59-8eb3-4e67-b3d6-7f2cd255b11d","resourceVersion":"417","creationTimestamp":"2023-11-27T23:44:25Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c87ccbe06dcbf99adfb998536f155f5a","kubernetes.io/config.mirror":"c87ccbe06dcbf99adfb998536f155f5a","kubernetes.io/config.seen":"2023-11-27T23:44:24.605675386Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:44:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1127 23:45:16.487651   97564 request.go:629] Waited for 196.303637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:45:16.487710   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-595051
	I1127 23:45:16.487715   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:16.487722   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:16.487728   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:16.490116   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:16.490136   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:16.490142   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:16.490148   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:16.490153   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:16 GMT
	I1127 23:45:16.490158   97564 round_trippers.go:580]     Audit-Id: a248d9c4-8d43-4874-9c90-6566fa2d8eb1
	I1127 23:45:16.490163   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:16.490168   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:16.490276   97564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:44:21Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1127 23:45:16.490602   97564 pod_ready.go:92] pod "kube-scheduler-multinode-595051" in "kube-system" namespace has status "Ready":"True"
	I1127 23:45:16.490618   97564 pod_ready.go:81] duration metric: took 399.754694ms waiting for pod "kube-scheduler-multinode-595051" in "kube-system" namespace to be "Ready" ...
	I1127 23:45:16.490629   97564 pod_ready.go:38] duration metric: took 1.200699856s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 23:45:16.490645   97564 system_svc.go:44] waiting for kubelet service to be running ....
	I1127 23:45:16.490689   97564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:45:16.501558   97564 system_svc.go:56] duration metric: took 10.907371ms WaitForService to wait for kubelet.
	I1127 23:45:16.501585   97564 kubeadm.go:581] duration metric: took 18.235702504s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1127 23:45:16.501611   97564 node_conditions.go:102] verifying NodePressure condition ...
	I1127 23:45:16.688082   97564 request.go:629] Waited for 186.384478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1127 23:45:16.688148   97564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1127 23:45:16.688153   97564 round_trippers.go:469] Request Headers:
	I1127 23:45:16.688161   97564 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:45:16.688167   97564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:45:16.690595   97564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:45:16.690617   97564 round_trippers.go:577] Response Headers:
	I1127 23:45:16.690628   97564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:45:16.690637   97564 round_trippers.go:580]     Content-Type: application/json
	I1127 23:45:16.690646   97564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e8fe257f-e4ff-4a03-9b78-3b004027fa4a
	I1127 23:45:16.690655   97564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5bbadc7f-542d-467f-9a11-fd839c924e33
	I1127 23:45:16.690664   97564 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:45:16 GMT
	I1127 23:45:16.690676   97564 round_trippers.go:580]     Audit-Id: 02d55552-ee20-4034-b81e-ab65e35ffa5f
	I1127 23:45:16.690809   97564 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"496"},"items":[{"metadata":{"name":"multinode-595051","uid":"4884eb3c-b94a-408b-a7a5-1be15bbb4508","resourceVersion":"387","creationTimestamp":"2023-11-27T23:44:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-595051","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-595051","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_44_25_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12288 chars]
	I1127 23:45:16.691298   97564 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1127 23:45:16.691315   97564 node_conditions.go:123] node cpu capacity is 8
	I1127 23:45:16.691326   97564 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1127 23:45:16.691334   97564 node_conditions.go:123] node cpu capacity is 8
	I1127 23:45:16.691339   97564 node_conditions.go:105] duration metric: took 189.723753ms to run NodePressure ...
	I1127 23:45:16.691349   97564 start.go:228] waiting for startup goroutines ...
	I1127 23:45:16.691386   97564 start.go:242] writing updated cluster config ...
	I1127 23:45:16.691648   97564 ssh_runner.go:195] Run: rm -f paused
	I1127 23:45:16.736442   97564 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1127 23:45:16.738818   97564 out.go:177] * Done! kubectl is now configured to use "multinode-595051" cluster and "default" namespace by default
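
The pod_ready.go lines above poll each system-critical pod until its Ready condition is True, pacing requests under client-go's default client-side rate limit (the "Waited for ... due to client-side throttling" lines). A minimal sketch of that polling pattern, assuming client-go is available; the kubeconfig path and the isPodReady helper are illustrative, not minikube's actual code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True,
// mirroring the check behind the pod_ready.go log lines.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 200ms, up to 6 minutes, like the "waiting up to 6m0s" lines above.
	err = wait.PollImmediate(200*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-gjwvt", metav1.GetOptions{})
		if err != nil {
			return false, nil // transient error: keep polling
		}
		return isPodReady(pod), nil
	})
	fmt.Println("ready:", err == nil)
}
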
	
	* 
	* ==> CRI-O <==
	* Nov 27 23:44:40 multinode-595051 crio[960]: time="2023-11-27 23:44:40.430639878Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8f52a64b84c7a949b2cb56545d37acf92a67557824a7f85a49c5742e4d08f983/merged/etc/passwd: no such file or directory"
	Nov 27 23:44:40 multinode-595051 crio[960]: time="2023-11-27 23:44:40.430672752Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8f52a64b84c7a949b2cb56545d37acf92a67557824a7f85a49c5742e4d08f983/merged/etc/group: no such file or directory"
	Nov 27 23:44:40 multinode-595051 crio[960]: time="2023-11-27 23:44:40.467764970Z" level=info msg="Created container f67423c183c482c54b693f0e601d832e5326ac324490fdbf4a3b093b91d51eb5: kube-system/storage-provisioner/storage-provisioner" id=cbc812cf-c27c-49af-bec4-a786e449e02a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 27 23:44:40 multinode-595051 crio[960]: time="2023-11-27 23:44:40.468455782Z" level=info msg="Starting container: f67423c183c482c54b693f0e601d832e5326ac324490fdbf4a3b093b91d51eb5" id=5bee8b69-d710-41b8-91af-d0d493e7fdd9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 27 23:44:40 multinode-595051 crio[960]: time="2023-11-27 23:44:40.476186667Z" level=info msg="Started container" PID=2374 containerID=f67423c183c482c54b693f0e601d832e5326ac324490fdbf4a3b093b91d51eb5 description=kube-system/storage-provisioner/storage-provisioner id=5bee8b69-d710-41b8-91af-d0d493e7fdd9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3d99be98b7bfb01a9e22ecb225bbe17bd391895952df5bd88bb8408b8eb811b5
	Nov 27 23:45:17 multinode-595051 crio[960]: time="2023-11-27 23:45:17.732559476Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-zp72z/POD" id=f8617997-a31f-4974-9ffe-2a8157f968fa name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 27 23:45:17 multinode-595051 crio[960]: time="2023-11-27 23:45:17.732632642Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 27 23:45:17 multinode-595051 crio[960]: time="2023-11-27 23:45:17.747658670Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-zp72z Namespace:default ID:dfe520f4e7fba70d39c421c092fd4d19741dad9d46aa68f234ad5e36d55185da UID:0e969491-7053-43be-b692-85897f7451d4 NetNS:/var/run/netns/c2d178d5-85d8-42b2-a53a-b244c21e3437 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 27 23:45:17 multinode-595051 crio[960]: time="2023-11-27 23:45:17.747692861Z" level=info msg="Adding pod default_busybox-5bc68d56bd-zp72z to CNI network \"kindnet\" (type=ptp)"
	Nov 27 23:45:17 multinode-595051 crio[960]: time="2023-11-27 23:45:17.756884573Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-zp72z Namespace:default ID:dfe520f4e7fba70d39c421c092fd4d19741dad9d46aa68f234ad5e36d55185da UID:0e969491-7053-43be-b692-85897f7451d4 NetNS:/var/run/netns/c2d178d5-85d8-42b2-a53a-b244c21e3437 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 27 23:45:17 multinode-595051 crio[960]: time="2023-11-27 23:45:17.757031388Z" level=info msg="Checking pod default_busybox-5bc68d56bd-zp72z for CNI network kindnet (type=ptp)"
	Nov 27 23:45:17 multinode-595051 crio[960]: time="2023-11-27 23:45:17.782237459Z" level=info msg="Ran pod sandbox dfe520f4e7fba70d39c421c092fd4d19741dad9d46aa68f234ad5e36d55185da with infra container: default/busybox-5bc68d56bd-zp72z/POD" id=f8617997-a31f-4974-9ffe-2a8157f968fa name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 27 23:45:17 multinode-595051 crio[960]: time="2023-11-27 23:45:17.783490554Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=fb0b0274-44a9-43a0-9889-e9cc54894d5f name=/runtime.v1.ImageService/ImageStatus
	Nov 27 23:45:17 multinode-595051 crio[960]: time="2023-11-27 23:45:17.783782718Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=fb0b0274-44a9-43a0-9889-e9cc54894d5f name=/runtime.v1.ImageService/ImageStatus
	Nov 27 23:45:17 multinode-595051 crio[960]: time="2023-11-27 23:45:17.784634607Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=b16312ae-b9ba-49dc-9c63-975e2b31a9b9 name=/runtime.v1.ImageService/PullImage
	Nov 27 23:45:17 multinode-595051 crio[960]: time="2023-11-27 23:45:17.787518525Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Nov 27 23:45:17 multinode-595051 crio[960]: time="2023-11-27 23:45:17.940804771Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Nov 27 23:45:18 multinode-595051 crio[960]: time="2023-11-27 23:45:18.485333757Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=b16312ae-b9ba-49dc-9c63-975e2b31a9b9 name=/runtime.v1.ImageService/PullImage
	Nov 27 23:45:18 multinode-595051 crio[960]: time="2023-11-27 23:45:18.486426871Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=6c5f4797-4cfc-4d11-949e-b34b5a234d18 name=/runtime.v1.ImageService/ImageStatus
	Nov 27 23:45:18 multinode-595051 crio[960]: time="2023-11-27 23:45:18.487062215Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=6c5f4797-4cfc-4d11-949e-b34b5a234d18 name=/runtime.v1.ImageService/ImageStatus
	Nov 27 23:45:18 multinode-595051 crio[960]: time="2023-11-27 23:45:18.487776260Z" level=info msg="Creating container: default/busybox-5bc68d56bd-zp72z/busybox" id=06ffa757-cc93-4bac-bd53-3ad6f5ba06c4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 27 23:45:18 multinode-595051 crio[960]: time="2023-11-27 23:45:18.487858115Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 27 23:45:18 multinode-595051 crio[960]: time="2023-11-27 23:45:18.550627682Z" level=info msg="Created container 4c467598b8701b0145088a23416fdc8ea805b2f7817ed57e2fb1079fa0abac96: default/busybox-5bc68d56bd-zp72z/busybox" id=06ffa757-cc93-4bac-bd53-3ad6f5ba06c4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 27 23:45:18 multinode-595051 crio[960]: time="2023-11-27 23:45:18.551298374Z" level=info msg="Starting container: 4c467598b8701b0145088a23416fdc8ea805b2f7817ed57e2fb1079fa0abac96" id=32a43d67-60f7-4cb5-bd55-726e591a6524 name=/runtime.v1.RuntimeService/StartContainer
	Nov 27 23:45:18 multinode-595051 crio[960]: time="2023-11-27 23:45:18.559379085Z" level=info msg="Started container" PID=2507 containerID=4c467598b8701b0145088a23416fdc8ea805b2f7817ed57e2fb1079fa0abac96 description=default/busybox-5bc68d56bd-zp72z/busybox id=32a43d67-60f7-4cb5-bd55-726e591a6524 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dfe520f4e7fba70d39c421c092fd4d19741dad9d46aa68f234ad5e36d55185da
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4c467598b8701       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   dfe520f4e7fba       busybox-5bc68d56bd-zp72z
	f67423c183c48       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      42 seconds ago       Running             storage-provisioner       0                   3d99be98b7bfb       storage-provisioner
	40e51b6466412       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      43 seconds ago       Running             coredns                   0                   f6b21d5d6f8a8       coredns-5dd5756b68-px5k6
	38ac33bd81008       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      44 seconds ago       Running             kindnet-cni               0                   91fbfffef0db5       kindnet-2hchr
	a5ea7b1ef2aaa       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      44 seconds ago       Running             kube-proxy                0                   eaac9b3b70142       kube-proxy-gjwvt
	51789b0804b01       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   06d44f2aee1ba       kube-controller-manager-multinode-595051
	0b79bec1eef50       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   5e8516a5656ea       kube-apiserver-multinode-595051
	b923c3eb2e262       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   25735c3e1f88e       kube-scheduler-multinode-595051
	2f7bd23c1a41a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   d6204a24d619e       etcd-multinode-595051
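
The container status table above is what the CRI RuntimeService reports over CRI-O's unix socket (the same RunPodSandbox/CreateContainer/StartContainer calls visible in the CRI-O log). A hedged sketch of listing containers through the CRI gRPC API, assuming the k8s.io/cri-api and google.golang.org/grpc modules; the socket path comes from the node's cri-socket annotation above:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's socket, per the kubeadm.alpha.kubernetes.io/cri-socket annotation.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Roughly the CONTAINER / NAME / STATE columns of the table above.
		fmt.Printf("%.13s  %-25s  %s\n", c.Id, c.Metadata.Name, c.State)
	}
}
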
	
	* 
	* ==> coredns [40e51b6466412aa82b256daa18ad4fbd9a53d0a4dafcd4098451bb7646b16511] <==
	* [INFO] 10.244.1.2:55530 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086781s
	[INFO] 10.244.0.3:48308 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101877s
	[INFO] 10.244.0.3:41130 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001709224s
	[INFO] 10.244.0.3:38662 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000084795s
	[INFO] 10.244.0.3:56170 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000066015s
	[INFO] 10.244.0.3:54121 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001123212s
	[INFO] 10.244.0.3:55285 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000058763s
	[INFO] 10.244.0.3:42971 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000053509s
	[INFO] 10.244.0.3:56516 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057923s
	[INFO] 10.244.1.2:35265 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001453s
	[INFO] 10.244.1.2:41783 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088013s
	[INFO] 10.244.1.2:50354 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000053826s
	[INFO] 10.244.1.2:50454 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000045999s
	[INFO] 10.244.0.3:40648 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105571s
	[INFO] 10.244.0.3:36406 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069003s
	[INFO] 10.244.0.3:43285 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000055548s
	[INFO] 10.244.0.3:44587 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000048106s
	[INFO] 10.244.1.2:42672 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113135s
	[INFO] 10.244.1.2:42457 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00013511s
	[INFO] 10.244.1.2:52231 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000120278s
	[INFO] 10.244.1.2:36433 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085776s
	[INFO] 10.244.0.3:51123 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101586s
	[INFO] 10.244.0.3:42507 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000096256s
	[INFO] 10.244.0.3:60990 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000064589s
	[INFO] 10.244.0.3:38985 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000058251s
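
The coredns entries above are ordinary A/AAAA/PTR queries arriving at the cluster DNS service (10.96.0.10, per the kube-dns clusterIP in the apiserver log below). From inside a pod they can be reproduced with Go's resolver; a sketch, with the service name taken from the log:

package main

import (
	"context"
	"fmt"
	"net"
)

func main() {
	// Resolves via /etc/resolv.conf, which in-cluster points at kube-dns.
	addrs, err := net.DefaultResolver.LookupHost(context.TODO(), "kubernetes.default.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	fmt.Println(addrs) // e.g. [10.96.0.1]

	// The PTR queries in the log are reverse lookups of the same address.
	names, _ := net.DefaultResolver.LookupAddr(context.TODO(), "10.96.0.1")
	fmt.Println(names)
}
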
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-595051
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-595051
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45
	                    minikube.k8s.io/name=multinode-595051
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_27T23_44_25_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Nov 2023 23:44:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-595051
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Nov 2023 23:45:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Nov 2023 23:44:38 +0000   Mon, 27 Nov 2023 23:44:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Nov 2023 23:44:38 +0000   Mon, 27 Nov 2023 23:44:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Nov 2023 23:44:38 +0000   Mon, 27 Nov 2023 23:44:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Nov 2023 23:44:38 +0000   Mon, 27 Nov 2023 23:44:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-595051
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	System Info:
	  Machine ID:                 929ef90254744c6a9ece73cd8d165ba8
	  System UUID:                74d7679e-fb2e-4c4c-8cdd-03cd4a4a1bf4
	  Boot ID:                    ccf6e8a7-9afe-448c-b481-9ad79744adaf
	  Kernel Version:             5.15.0-1046-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-zp72z                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 coredns-5dd5756b68-px5k6                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     45s
	  kube-system                 etcd-multinode-595051                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         58s
	  kube-system                 kindnet-2hchr                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      45s
	  kube-system                 kube-apiserver-multinode-595051             250m (3%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-multinode-595051    200m (2%)     0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-proxy-gjwvt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-scheduler-multinode-595051             100m (1%)     0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 44s   kube-proxy       
	  Normal  Starting                 58s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s   kubelet          Node multinode-595051 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s   kubelet          Node multinode-595051 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s   kubelet          Node multinode-595051 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s   node-controller  Node multinode-595051 event: Registered Node multinode-595051 in Controller
	  Normal  NodeReady                44s   kubelet          Node multinode-595051 status is now: NodeReady
	
	
	Name:               multinode-595051-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-595051-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Nov 2023 23:44:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-595051-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Nov 2023 23:45:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Nov 2023 23:45:14 +0000   Mon, 27 Nov 2023 23:44:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Nov 2023 23:45:14 +0000   Mon, 27 Nov 2023 23:44:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Nov 2023 23:45:14 +0000   Mon, 27 Nov 2023 23:44:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Nov 2023 23:45:14 +0000   Mon, 27 Nov 2023 23:45:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-595051-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb1de7270b7841d6aec8d4136232e7b0
	  System UUID:                2ed75d09-4196-4b11-9dec-59aa1c2fe859
	  Boot ID:                    ccf6e8a7-9afe-448c-b481-9ad79744adaf
	  Kernel Version:             5.15.0-1046-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-8pbpd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kindnet-rdsw2               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-proxy-hsm4c            0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12s                kube-proxy       
	  Normal  NodeHasSufficientMemory  25s (x5 over 26s)  kubelet          Node multinode-595051-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x5 over 26s)  kubelet          Node multinode-595051-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x5 over 26s)  kubelet          Node multinode-595051-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21s                node-controller  Node multinode-595051-m02 event: Registered Node multinode-595051-m02 in Controller
	  Normal  NodeReady                8s                 kubelet          Node multinode-595051-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.004918] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006765] FS-Cache: N-cookie d=000000005a04be6d{9p.inode} n=00000000c65f5879
	[  +0.008731] FS-Cache: N-key=[8] '8fa00f0200000000'
	[  +0.264322] FS-Cache: Duplicate cookie detected
	[  +0.004664] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006733] FS-Cache: O-cookie d=000000005a04be6d{9p.inode} n=00000000b6e3f4db
	[  +0.007353] FS-Cache: O-key=[8] '97a00f0200000000'
	[  +0.004961] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.007979] FS-Cache: N-cookie d=000000005a04be6d{9p.inode} n=00000000bcfa7cd3
	[  +0.008706] FS-Cache: N-key=[8] '97a00f0200000000'
	[  +4.383695] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov27 23:36] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 72 26 ac bd 04 fd 4a e6 e3 33 da f7 08 00
	[  +1.007864] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 72 26 ac bd 04 fd 4a e6 e3 33 da f7 08 00
	[  +2.015744] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 72 26 ac bd 04 fd 4a e6 e3 33 da f7 08 00
	[  +4.159578] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 72 26 ac bd 04 fd 4a e6 e3 33 da f7 08 00
	[  +8.191130] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 26 ac bd 04 fd 4a e6 e3 33 da f7 08 00
	[ +16.126345] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 72 26 ac bd 04 fd 4a e6 e3 33 da f7 08 00
	[Nov27 23:37] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 72 26 ac bd 04 fd 4a e6 e3 33 da f7 08 00
	
	* 
	* ==> etcd [2f7bd23c1a41a6deacbd69aa68a56fffc2fd16c6b314bfdb49d9b856fa796fd6] <==
	* {"level":"info","ts":"2023-11-27T23:44:19.56381Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-27T23:44:19.564052Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-11-27T23:44:19.564086Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-11-27T23:44:19.951739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-27T23:44:19.951884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-27T23:44:19.951927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-11-27T23:44:19.95195Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-11-27T23:44:19.951958Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-11-27T23:44:19.951971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-11-27T23:44:19.951982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-11-27T23:44:19.952841Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-27T23:44:19.953498Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-27T23:44:19.953496Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-595051 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-27T23:44:19.953536Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-27T23:44:19.953791Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-27T23:44:19.953855Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-27T23:44:19.954018Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-27T23:44:19.954096Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-27T23:44:19.953891Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-27T23:44:19.954733Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-27T23:44:19.954912Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-11-27T23:44:50.730867Z","caller":"traceutil/trace.go:171","msg":"trace[1739709427] linearizableReadLoop","detail":"{readStateIndex:440; appliedIndex:439; }","duration":"119.04799ms","start":"2023-11-27T23:44:50.611797Z","end":"2023-11-27T23:44:50.730845Z","steps":["trace[1739709427] 'read index received'  (duration: 118.936316ms)","trace[1739709427] 'applied index is now lower than readState.Index'  (duration: 110.827µs)"],"step_count":2}
	{"level":"info","ts":"2023-11-27T23:44:50.730959Z","caller":"traceutil/trace.go:171","msg":"trace[621023051] transaction","detail":"{read_only:false; response_revision:422; number_of_response:1; }","duration":"186.885104ms","start":"2023-11-27T23:44:50.544054Z","end":"2023-11-27T23:44:50.730939Z","steps":["trace[621023051] 'process raft request'  (duration: 186.669214ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T23:44:50.730996Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.199106ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-27T23:44:50.731048Z","caller":"traceutil/trace.go:171","msg":"trace[1215954561] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:422; }","duration":"119.274071ms","start":"2023-11-27T23:44:50.611765Z","end":"2023-11-27T23:44:50.731039Z","steps":["trace[1215954561] 'agreement among raft nodes before linearized reading'  (duration: 119.173129ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  23:45:23 up 27 min,  0 users,  load average: 1.16, 1.74, 1.19
	Linux multinode-595051 5.15.0-1046-gcp #54~20.04.1-Ubuntu SMP Wed Oct 25 08:22:15 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [38ac33bd8100869adab24d8473d5aeb724fd98003e275555c4be109e0c567be1] <==
	* I1127 23:44:38.150436       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1127 23:44:38.150505       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I1127 23:44:38.150665       1 main.go:116] setting mtu 1500 for CNI 
	I1127 23:44:38.150685       1 main.go:146] kindnetd IP family: "ipv4"
	I1127 23:44:38.150714       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1127 23:44:38.452661       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1127 23:44:38.452698       1 main.go:227] handling current node
	I1127 23:44:48.553559       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1127 23:44:48.553585       1 main.go:227] handling current node
	I1127 23:44:58.565376       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1127 23:44:58.565401       1 main.go:227] handling current node
	I1127 23:44:58.565410       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1127 23:44:58.565415       1 main.go:250] Node multinode-595051-m02 has CIDR [10.244.1.0/24] 
	I1127 23:44:58.565579       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I1127 23:45:08.577897       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1127 23:45:08.577921       1 main.go:227] handling current node
	I1127 23:45:08.577930       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1127 23:45:08.577935       1 main.go:250] Node multinode-595051-m02 has CIDR [10.244.1.0/24] 
	I1127 23:45:18.581656       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1127 23:45:18.581680       1 main.go:227] handling current node
	I1127 23:45:18.581689       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1127 23:45:18.581694       1 main.go:250] Node multinode-595051-m02 has CIDR [10.244.1.0/24] 
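
The routes.go line above installs a route sending the other node's pod CIDR (10.244.1.0/24) via that node's IP (192.168.58.3). A sketch of the same operation with the vishvananda/netlink package, which CNIs of this kind typically use; it assumes root on Linux and is illustrative rather than kindnet's actual code:

package main

import (
	"net"

	"github.com/vishvananda/netlink"
)

func main() {
	// Route pod traffic for the remote node's CIDR via its node IP,
	// matching "Adding route {Dst: 10.244.1.0/24 ... Gw: 192.168.58.3}".
	_, dst, err := net.ParseCIDR("10.244.1.0/24")
	if err != nil {
		panic(err)
	}
	route := &netlink.Route{
		Dst: dst,
		Gw:  net.ParseIP("192.168.58.3"),
	}
	if err := netlink.RouteReplace(route); err != nil { // idempotent add-or-update
		panic(err)
	}
}
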
	
	* 
	* ==> kube-apiserver [0b79bec1eef507cd6c999f89abe9137f6ef828e5a098739ced15b622587d551d] <==
	* I1127 23:44:21.748136       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1127 23:44:21.772792       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1127 23:44:21.845703       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1127 23:44:21.846013       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1127 23:44:21.846919       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1127 23:44:21.846940       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1127 23:44:21.847443       1 controller.go:624] quota admission added evaluator for: namespaces
	I1127 23:44:21.847995       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1127 23:44:21.848250       1 cache.go:39] Caches are synced for autoregister controller
	I1127 23:44:21.942423       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1127 23:44:22.650295       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1127 23:44:22.654389       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1127 23:44:22.654408       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1127 23:44:23.048776       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1127 23:44:23.082560       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1127 23:44:23.156021       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1127 23:44:23.161458       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1127 23:44:23.162468       1 controller.go:624] quota admission added evaluator for: endpoints
	I1127 23:44:23.166862       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1127 23:44:23.684754       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1127 23:44:24.552990       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1127 23:44:24.562084       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1127 23:44:24.571510       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1127 23:44:37.452313       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1127 23:44:37.452804       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [51789b0804b016bd4343d0696030252a5a02192eaa6e1316fc807a111e8a174c] <==
	* I1127 23:44:37.780171       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.148µs"
	I1127 23:44:39.010827       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="94.919µs"
	I1127 23:44:39.021428       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.46µs"
	I1127 23:44:39.863196       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="101.994µs"
	I1127 23:44:39.881876       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.030895ms"
	I1127 23:44:39.882024       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.676µs"
	I1127 23:44:41.943602       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1127 23:44:57.746843       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-595051-m02\" does not exist"
	I1127 23:44:57.755712       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hsm4c"
	I1127 23:44:57.757389       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rdsw2"
	I1127 23:44:57.761934       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-595051-m02" podCIDRs=["10.244.1.0/24"]
	I1127 23:45:01.947021       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-595051-m02"
	I1127 23:45:01.947069       1 event.go:307] "Event occurred" object="multinode-595051-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-595051-m02 event: Registered Node multinode-595051-m02 in Controller"
	I1127 23:45:14.997414       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-595051-m02"
	I1127 23:45:17.411157       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1127 23:45:17.420915       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-8pbpd"
	I1127 23:45:17.424610       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-zp72z"
	I1127 23:45:17.430204       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="19.299105ms"
	I1127 23:45:17.440197       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="9.931307ms"
	I1127 23:45:17.440321       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="80.309µs"
	I1127 23:45:17.441666       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="33.775µs"
	I1127 23:45:18.939608       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="3.778357ms"
	I1127 23:45:18.939679       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="35.209µs"
	I1127 23:45:19.307031       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.107655ms"
	I1127 23:45:19.307108       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="40.588µs"
	
	* 
	* ==> kube-proxy [a5ea7b1ef2aaa6e714c05f71bcd66038f91b8fb6e3a2a2507347ebf1b50854b0] <==
	* I1127 23:44:38.162467       1 server_others.go:69] "Using iptables proxy"
	I1127 23:44:38.171476       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1127 23:44:38.190002       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1127 23:44:38.191891       1 server_others.go:152] "Using iptables Proxier"
	I1127 23:44:38.191924       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1127 23:44:38.191935       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1127 23:44:38.191964       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1127 23:44:38.192156       1 server.go:846] "Version info" version="v1.28.4"
	I1127 23:44:38.192171       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1127 23:44:38.192762       1 config.go:315] "Starting node config controller"
	I1127 23:44:38.192786       1 config.go:97] "Starting endpoint slice config controller"
	I1127 23:44:38.192799       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1127 23:44:38.192807       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1127 23:44:38.192870       1 config.go:188] "Starting service config controller"
	I1127 23:44:38.192942       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1127 23:44:38.293515       1 shared_informer.go:318] Caches are synced for service config
	I1127 23:44:38.293568       1 shared_informer.go:318] Caches are synced for node config
	I1127 23:44:38.293545       1 shared_informer.go:318] Caches are synced for endpoint slice config
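
Note: two of the kube-proxy lines above record a quiet fallback: detect-local-mode is ClusterCIDR, but this single-stack cluster configures no IPv6 cluster CIDR, so the IPv6 proxier downgrades to a no-op local-traffic detector. A toy sketch of that decision (simplified; the real logic lives in kube-proxy's server_others.go, and the IPv4 CIDR below is illustrative):

    package main

    import "fmt"

    // detectorFor mirrors the fallback logged above: with detect-local-mode
    // ClusterCIDR but no CIDR configured for a family, use a no-op detector.
    func detectorFor(family, clusterCIDR string) string {
        if clusterCIDR == "" {
            return family + ": defaulting to no-op detect-local"
        }
        return family + ": ClusterCIDR detect-local on " + clusterCIDR
    }

    func main() {
        fmt.Println(detectorFor("IPv4", "10.244.0.0/16")) // CIDR present
        fmt.Println(detectorFor("IPv6", ""))              // CIDR absent
    }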
	
	* 
	* ==> kube-scheduler [b923c3eb2e26228c6e57d4e22a5213786f6b92ea21a6a1e1443cc818022e5b95] <==
	* E1127 23:44:21.843348       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1127 23:44:21.843349       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1127 23:44:21.843355       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1127 23:44:21.843368       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1127 23:44:21.843282       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1127 23:44:21.843351       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1127 23:44:21.843425       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1127 23:44:21.843435       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1127 23:44:21.843440       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1127 23:44:21.843404       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1127 23:44:21.843480       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1127 23:44:21.843498       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1127 23:44:22.732841       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1127 23:44:22.732885       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1127 23:44:22.781739       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1127 23:44:22.781774       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1127 23:44:22.803037       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1127 23:44:22.803100       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1127 23:44:22.817434       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1127 23:44:22.817465       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1127 23:44:22.841737       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1127 23:44:22.841770       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1127 23:44:22.888678       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1127 23:44:22.888715       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1127 23:44:23.263887       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
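
Note: the scheduler's "forbidden" errors above are the usual start-up race: its informers begin listing before the system:kube-scheduler RBAC bindings have propagated, and the errors stop once caches sync (last line). If they persisted, a SubjectAccessReview is one way to confirm the binding; a minimal client-go sketch, assuming the kubeconfig path used elsewhere in this report:

    package main

    import (
        "context"
        "fmt"

        authorizationv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed path; any kubeconfig pointing at the cluster under test works.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17206-4554/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        sar := &authorizationv1.SubjectAccessReview{
            Spec: authorizationv1.SubjectAccessReviewSpec{
                User: "system:kube-scheduler",
                ResourceAttributes: &authorizationv1.ResourceAttributes{Verb: "list", Resource: "nodes"},
            },
        }
        res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(context.Background(), sar, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        // Expect allowed=true once the bootstrap RBAC has propagated.
        fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
    }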
	
	* 
	* ==> kubelet <==
	* Nov 27 23:44:37 multinode-595051 kubelet[1600]: I1127 23:44:37.549999    1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7tvp\" (UniqueName: \"kubernetes.io/projected/30497a11-9440-4749-bf2b-d01df4f4b9b9-kube-api-access-t7tvp\") pod \"kindnet-2hchr\" (UID: \"30497a11-9440-4749-bf2b-d01df4f4b9b9\") " pod="kube-system/kindnet-2hchr"
	Nov 27 23:44:37 multinode-595051 kubelet[1600]: I1127 23:44:37.550032    1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b33c9c4-54cf-49e4-a97a-d782fa80c2d8-xtables-lock\") pod \"kube-proxy-gjwvt\" (UID: \"9b33c9c4-54cf-49e4-a97a-d782fa80c2d8\") " pod="kube-system/kube-proxy-gjwvt"
	Nov 27 23:44:37 multinode-595051 kubelet[1600]: I1127 23:44:37.550084    1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgglb\" (UniqueName: \"kubernetes.io/projected/9b33c9c4-54cf-49e4-a97a-d782fa80c2d8-kube-api-access-fgglb\") pod \"kube-proxy-gjwvt\" (UID: \"9b33c9c4-54cf-49e4-a97a-d782fa80c2d8\") " pod="kube-system/kube-proxy-gjwvt"
	Nov 27 23:44:37 multinode-595051 kubelet[1600]: I1127 23:44:37.550124    1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b33c9c4-54cf-49e4-a97a-d782fa80c2d8-lib-modules\") pod \"kube-proxy-gjwvt\" (UID: \"9b33c9c4-54cf-49e4-a97a-d782fa80c2d8\") " pod="kube-system/kube-proxy-gjwvt"
	Nov 27 23:44:37 multinode-595051 kubelet[1600]: I1127 23:44:37.550156    1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30497a11-9440-4749-bf2b-d01df4f4b9b9-xtables-lock\") pod \"kindnet-2hchr\" (UID: \"30497a11-9440-4749-bf2b-d01df4f4b9b9\") " pod="kube-system/kindnet-2hchr"
	Nov 27 23:44:37 multinode-595051 kubelet[1600]: I1127 23:44:37.550186    1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9b33c9c4-54cf-49e4-a97a-d782fa80c2d8-kube-proxy\") pod \"kube-proxy-gjwvt\" (UID: \"9b33c9c4-54cf-49e4-a97a-d782fa80c2d8\") " pod="kube-system/kube-proxy-gjwvt"
	Nov 27 23:44:37 multinode-595051 kubelet[1600]: W1127 23:44:37.903042    1600 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c6c4601dedfe3c650ee48be59f93374b4667adfe091881024e85eb053a15593b/crio-eaac9b3b70142a59b9933e12c312e78cb7badedf6802eb758bef38112d7ac353 WatchSource:0}: Error finding container eaac9b3b70142a59b9933e12c312e78cb7badedf6802eb758bef38112d7ac353: Status 404 returned error can't find the container with id eaac9b3b70142a59b9933e12c312e78cb7badedf6802eb758bef38112d7ac353
	Nov 27 23:44:37 multinode-595051 kubelet[1600]: W1127 23:44:37.903319    1600 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c6c4601dedfe3c650ee48be59f93374b4667adfe091881024e85eb053a15593b/crio-91fbfffef0db510c0d8d5b6caa00772de66b7cbd706de34fbb2166370075663d WatchSource:0}: Error finding container 91fbfffef0db510c0d8d5b6caa00772de66b7cbd706de34fbb2166370075663d: Status 404 returned error can't find the container with id 91fbfffef0db510c0d8d5b6caa00772de66b7cbd706de34fbb2166370075663d
	Nov 27 23:44:38 multinode-595051 kubelet[1600]: I1127 23:44:38.870343    1600 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-2hchr" podStartSLOduration=1.870289514 podCreationTimestamp="2023-11-27 23:44:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-27 23:44:38.858661223 +0000 UTC m=+14.329575069" watchObservedRunningTime="2023-11-27 23:44:38.870289514 +0000 UTC m=+14.341203361"
	Nov 27 23:44:38 multinode-595051 kubelet[1600]: I1127 23:44:38.992032    1600 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 27 23:44:39 multinode-595051 kubelet[1600]: I1127 23:44:39.010770    1600 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-gjwvt" podStartSLOduration=2.010717926 podCreationTimestamp="2023-11-27 23:44:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-27 23:44:38.870417286 +0000 UTC m=+14.341331131" watchObservedRunningTime="2023-11-27 23:44:39.010717926 +0000 UTC m=+14.481631769"
	Nov 27 23:44:39 multinode-595051 kubelet[1600]: I1127 23:44:39.011179    1600 topology_manager.go:215] "Topology Admit Handler" podUID="31070a53-8a76-42ef-ba74-254dc4e13178" podNamespace="kube-system" podName="coredns-5dd5756b68-px5k6"
	Nov 27 23:44:39 multinode-595051 kubelet[1600]: I1127 23:44:39.060450    1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31070a53-8a76-42ef-ba74-254dc4e13178-config-volume\") pod \"coredns-5dd5756b68-px5k6\" (UID: \"31070a53-8a76-42ef-ba74-254dc4e13178\") " pod="kube-system/coredns-5dd5756b68-px5k6"
	Nov 27 23:44:39 multinode-595051 kubelet[1600]: I1127 23:44:39.060504    1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxm9d\" (UniqueName: \"kubernetes.io/projected/31070a53-8a76-42ef-ba74-254dc4e13178-kube-api-access-cxm9d\") pod \"coredns-5dd5756b68-px5k6\" (UID: \"31070a53-8a76-42ef-ba74-254dc4e13178\") " pod="kube-system/coredns-5dd5756b68-px5k6"
	Nov 27 23:44:39 multinode-595051 kubelet[1600]: W1127 23:44:39.350842    1600 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c6c4601dedfe3c650ee48be59f93374b4667adfe091881024e85eb053a15593b/crio-f6b21d5d6f8a810810c8e096734d3d1895dbae2c5f55be47904621c0f915ab78 WatchSource:0}: Error finding container f6b21d5d6f8a810810c8e096734d3d1895dbae2c5f55be47904621c0f915ab78: Status 404 returned error can't find the container with id f6b21d5d6f8a810810c8e096734d3d1895dbae2c5f55be47904621c0f915ab78
	Nov 27 23:44:39 multinode-595051 kubelet[1600]: I1127 23:44:39.863281    1600 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-px5k6" podStartSLOduration=2.863230082 podCreationTimestamp="2023-11-27 23:44:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-27 23:44:39.862917159 +0000 UTC m=+15.333831002" watchObservedRunningTime="2023-11-27 23:44:39.863230082 +0000 UTC m=+15.334143926"
	Nov 27 23:44:40 multinode-595051 kubelet[1600]: I1127 23:44:40.075342    1600 topology_manager.go:215] "Topology Admit Handler" podUID="4321beea-2377-49ee-947f-88a6473310ea" podNamespace="kube-system" podName="storage-provisioner"
	Nov 27 23:44:40 multinode-595051 kubelet[1600]: I1127 23:44:40.167544    1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b6nc\" (UniqueName: \"kubernetes.io/projected/4321beea-2377-49ee-947f-88a6473310ea-kube-api-access-9b6nc\") pod \"storage-provisioner\" (UID: \"4321beea-2377-49ee-947f-88a6473310ea\") " pod="kube-system/storage-provisioner"
	Nov 27 23:44:40 multinode-595051 kubelet[1600]: I1127 23:44:40.167604    1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4321beea-2377-49ee-947f-88a6473310ea-tmp\") pod \"storage-provisioner\" (UID: \"4321beea-2377-49ee-947f-88a6473310ea\") " pod="kube-system/storage-provisioner"
	Nov 27 23:44:40 multinode-595051 kubelet[1600]: W1127 23:44:40.414803    1600 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c6c4601dedfe3c650ee48be59f93374b4667adfe091881024e85eb053a15593b/crio-3d99be98b7bfb01a9e22ecb225bbe17bd391895952df5bd88bb8408b8eb811b5 WatchSource:0}: Error finding container 3d99be98b7bfb01a9e22ecb225bbe17bd391895952df5bd88bb8408b8eb811b5: Status 404 returned error can't find the container with id 3d99be98b7bfb01a9e22ecb225bbe17bd391895952df5bd88bb8408b8eb811b5
	Nov 27 23:44:44 multinode-595051 kubelet[1600]: I1127 23:44:44.762524    1600 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=6.762471435 podCreationTimestamp="2023-11-27 23:44:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-27 23:44:40.86448474 +0000 UTC m=+16.335398585" watchObservedRunningTime="2023-11-27 23:44:44.762471435 +0000 UTC m=+20.233385278"
	Nov 27 23:45:17 multinode-595051 kubelet[1600]: I1127 23:45:17.430710    1600 topology_manager.go:215] "Topology Admit Handler" podUID="0e969491-7053-43be-b692-85897f7451d4" podNamespace="default" podName="busybox-5bc68d56bd-zp72z"
	Nov 27 23:45:17 multinode-595051 kubelet[1600]: I1127 23:45:17.498229    1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2nth\" (UniqueName: \"kubernetes.io/projected/0e969491-7053-43be-b692-85897f7451d4-kube-api-access-k2nth\") pod \"busybox-5bc68d56bd-zp72z\" (UID: \"0e969491-7053-43be-b692-85897f7451d4\") " pod="default/busybox-5bc68d56bd-zp72z"
	Nov 27 23:45:17 multinode-595051 kubelet[1600]: W1127 23:45:17.778985    1600 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c6c4601dedfe3c650ee48be59f93374b4667adfe091881024e85eb053a15593b/crio-dfe520f4e7fba70d39c421c092fd4d19741dad9d46aa68f234ad5e36d55185da WatchSource:0}: Error finding container dfe520f4e7fba70d39c421c092fd4d19741dad9d46aa68f234ad5e36d55185da: Status 404 returned error can't find the container with id dfe520f4e7fba70d39c421c092fd4d19741dad9d46aa68f234ad5e36d55185da
	Nov 27 23:45:18 multinode-595051 kubelet[1600]: I1127 23:45:18.935914    1600 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-zp72z" podStartSLOduration=1.233993987 podCreationTimestamp="2023-11-27 23:45:17 +0000 UTC" firstStartedPulling="2023-11-27 23:45:17.783965892 +0000 UTC m=+53.254879727" lastFinishedPulling="2023-11-27 23:45:18.485843903 +0000 UTC m=+53.956757745" observedRunningTime="2023-11-27 23:45:18.93568549 +0000 UTC m=+54.406599347" watchObservedRunningTime="2023-11-27 23:45:18.935872005 +0000 UTC m=+54.406785849"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-595051 -n multinode-595051
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-595051 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.13s)
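
Note: PingHostFrom2Pods execs into each busybox pod and checks that the host is reachable from inside the cluster. A rough stand-alone equivalent (pod names taken from the controller-manager events above; host.minikube.internal is the name minikube publishes for the host, and the exact probe is an approximation of the test, not its source):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        pods := []string{"busybox-5bc68d56bd-8pbpd", "busybox-5bc68d56bd-zp72z"}
        for _, pod := range pods {
            // Resolve and ping the host from inside the pod, as the test does.
            out, err := exec.Command("kubectl", "--context", "multinode-595051",
                "exec", pod, "--", "sh", "-c",
                "nslookup host.minikube.internal && ping -c 1 host.minikube.internal").CombinedOutput()
            fmt.Printf("%s:\n%s(err=%v)\n", pod, out, err)
        }
    }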

                                                
                                    
x
+
TestRunningBinaryUpgrade (73.22s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.9.0.358980684.exe start -p running-upgrade-902052 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.9.0.358980684.exe start -p running-upgrade-902052 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m7.500670826s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-902052 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-902052 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (2.734222413s)

                                                
                                                
-- stdout --
	* [running-upgrade-902052] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-4554/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4554/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-902052 in cluster running-upgrade-902052
	* Pulling base image ...
	* Updating the running docker "running-upgrade-902052" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1127 23:57:12.226632  177633 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:57:12.226934  177633 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:57:12.226944  177633 out.go:309] Setting ErrFile to fd 2...
	I1127 23:57:12.226948  177633 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:57:12.227195  177633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4554/.minikube/bin
	I1127 23:57:12.227785  177633 out.go:303] Setting JSON to false
	I1127 23:57:12.229140  177633 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2384,"bootTime":1701127048,"procs":470,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 23:57:12.229209  177633 start.go:138] virtualization: kvm guest
	I1127 23:57:12.231964  177633 out.go:177] * [running-upgrade-902052] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 23:57:12.234005  177633 notify.go:220] Checking for updates...
	I1127 23:57:12.234032  177633 out.go:177]   - MINIKUBE_LOCATION=17206
	I1127 23:57:12.236868  177633 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:57:12.238642  177633 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-4554/kubeconfig
	I1127 23:57:12.240371  177633 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4554/.minikube
	I1127 23:57:12.242173  177633 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 23:57:12.243716  177633 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 23:57:12.245787  177633 config.go:182] Loaded profile config "running-upgrade-902052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1127 23:57:12.245816  177633 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1127 23:57:12.248309  177633 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1127 23:57:12.249857  177633 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:57:12.279934  177633 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 23:57:12.280049  177633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:57:12.358425  177633 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:true NGoroutines:69 SystemTime:2023-11-27 23:57:12.348127085 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 23:57:12.358517  177633 docker.go:295] overlay module found
	I1127 23:57:12.361646  177633 out.go:177] * Using the docker driver based on existing profile
	I1127 23:57:12.363265  177633 start.go:298] selected driver: docker
	I1127 23:57:12.363288  177633 start.go:902] validating driver "docker" against &{Name:running-upgrade-902052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-902052 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.4 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1127 23:57:12.363393  177633 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 23:57:12.368376  177633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:57:12.446871  177633 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:true NGoroutines:69 SystemTime:2023-11-27 23:57:12.435526934 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 23:57:12.447236  177633 cni.go:84] Creating CNI manager for ""
	I1127 23:57:12.447265  177633 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1127 23:57:12.447291  177633 start_flags.go:323] config:
	{Name:running-upgrade-902052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-902052 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.4 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1127 23:57:12.449572  177633 out.go:177] * Starting control plane node running-upgrade-902052 in cluster running-upgrade-902052
	I1127 23:57:12.451187  177633 cache.go:121] Beginning downloading kic base image for docker with crio
	I1127 23:57:12.452566  177633 out.go:177] * Pulling base image ...
	I1127 23:57:12.455436  177633 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1127 23:57:12.455572  177633 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 23:57:12.475820  177633 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1127 23:57:12.475850  177633 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	W1127 23:57:12.494550  177633 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1127 23:57:12.494695  177633 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/running-upgrade-902052/config.json ...
	I1127 23:57:12.494786  177633 cache.go:107] acquiring lock: {Name:mk9e92729a49752bfd048d6b7ac6eb2904673dd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:57:12.494865  177633 cache.go:115] /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1127 23:57:12.494872  177633 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 92.852µs
	I1127 23:57:12.494880  177633 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1127 23:57:12.494890  177633 cache.go:107] acquiring lock: {Name:mk5920c65f24682bffd31b8b5858c01ccdbe921b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:57:12.494913  177633 cache.go:115] /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1127 23:57:12.494912  177633 cache.go:194] Successfully downloaded all kic artifacts
	I1127 23:57:12.494917  177633 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 29.594µs
	I1127 23:57:12.494923  177633 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1127 23:57:12.494931  177633 cache.go:107] acquiring lock: {Name:mkf7652393bb754d7dcb51096be824a1dd596f9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:57:12.494940  177633 start.go:365] acquiring machines lock for running-upgrade-902052: {Name:mkab5d2c37b94f4e2f3f79c2aba2f38d55f12ae0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:57:12.494955  177633 cache.go:107] acquiring lock: {Name:mka77dd2e822e83f3b6b8ee9c876858ae2b332d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:57:12.495000  177633 cache.go:115] /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1127 23:57:12.495003  177633 start.go:369] acquired machines lock for "running-upgrade-902052" in 47.872µs
	I1127 23:57:12.495005  177633 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 49.895µs
	I1127 23:57:12.495011  177633 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1127 23:57:12.495016  177633 start.go:96] Skipping create...Using existing machine configuration
	I1127 23:57:12.495022  177633 fix.go:54] fixHost starting: m01
	I1127 23:57:12.495020  177633 cache.go:107] acquiring lock: {Name:mk09e71bc17b5abe067eb1ba3aea787566fa7949 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:57:12.495039  177633 cache.go:115] /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1127 23:57:12.495043  177633 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 24.284µs
	I1127 23:57:12.495048  177633 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1127 23:57:12.495057  177633 cache.go:107] acquiring lock: {Name:mkd363673b293eb97ca614b8ed2bb456e60ce5f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:57:12.495074  177633 cache.go:115] /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1127 23:57:12.495078  177633 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 22.804µs
	I1127 23:57:12.495085  177633 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1127 23:57:12.495091  177633 cache.go:107] acquiring lock: {Name:mkd0f39bfd5d03d654698430841c64949c3705b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:57:12.495111  177633 cache.go:115] /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1127 23:57:12.495115  177633 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 24.471µs
	I1127 23:57:12.495119  177633 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1127 23:57:12.495125  177633 cache.go:107] acquiring lock: {Name:mk1c9582307db87837ce900a4211d899eb4c9293 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:57:12.495143  177633 cache.go:115] /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1127 23:57:12.495146  177633 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 21.612µs
	I1127 23:57:12.495151  177633 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1127 23:57:12.494949  177633 cache.go:115] /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1127 23:57:12.495157  177633 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 227.632µs
	I1127 23:57:12.495162  177633 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1127 23:57:12.495166  177633 cache.go:87] Successfully saved all images to host disk.
	I1127 23:57:12.495289  177633 cli_runner.go:164] Run: docker container inspect running-upgrade-902052 --format={{.State.Status}}
	I1127 23:57:12.519380  177633 fix.go:102] recreateIfNeeded on running-upgrade-902052: state=Running err=<nil>
	W1127 23:57:12.519424  177633 fix.go:128] unexpected machine state, will restart: <nil>
	I1127 23:57:12.522485  177633 out.go:177] * Updating the running docker "running-upgrade-902052" container ...
	I1127 23:57:12.524555  177633 machine.go:88] provisioning docker machine ...
	I1127 23:57:12.524595  177633 ubuntu.go:169] provisioning hostname "running-upgrade-902052"
	I1127 23:57:12.524674  177633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-902052
	I1127 23:57:12.563945  177633 main.go:141] libmachine: Using SSH client type: native
	I1127 23:57:12.564510  177633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32936 <nil> <nil>}
	I1127 23:57:12.564538  177633 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-902052 && echo "running-upgrade-902052" | sudo tee /etc/hostname
	I1127 23:57:12.698868  177633 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-902052
	
	I1127 23:57:12.698962  177633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-902052
	I1127 23:57:12.719592  177633 main.go:141] libmachine: Using SSH client type: native
	I1127 23:57:12.720046  177633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32936 <nil> <nil>}
	I1127 23:57:12.720073  177633 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-902052' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-902052/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-902052' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1127 23:57:12.838935  177633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1127 23:57:12.838963  177633 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4554/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4554/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4554/.minikube}
	I1127 23:57:12.838998  177633 ubuntu.go:177] setting up certificates
	I1127 23:57:12.839011  177633 provision.go:83] configureAuth start
	I1127 23:57:12.839071  177633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-902052
	I1127 23:57:12.859746  177633 provision.go:138] copyHostCerts
	I1127 23:57:12.859802  177633 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4554/.minikube/ca.pem, removing ...
	I1127 23:57:12.859809  177633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4554/.minikube/ca.pem
	I1127 23:57:12.859867  177633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4554/.minikube/ca.pem (1078 bytes)
	I1127 23:57:12.859987  177633 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4554/.minikube/cert.pem, removing ...
	I1127 23:57:12.859995  177633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4554/.minikube/cert.pem
	I1127 23:57:12.860022  177633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4554/.minikube/cert.pem (1123 bytes)
	I1127 23:57:12.860090  177633 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4554/.minikube/key.pem, removing ...
	I1127 23:57:12.860097  177633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4554/.minikube/key.pem
	I1127 23:57:12.860120  177633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4554/.minikube/key.pem (1679 bytes)
	I1127 23:57:12.860174  177633 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4554/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-902052 san=[172.17.0.4 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-902052]
	I1127 23:57:13.265868  177633 provision.go:172] copyRemoteCerts
	I1127 23:57:13.265936  177633 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1127 23:57:13.265980  177633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-902052
	I1127 23:57:13.286354  177633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32936 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/running-upgrade-902052/id_rsa Username:docker}
	I1127 23:57:13.373599  177633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1127 23:57:13.393192  177633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1127 23:57:13.412971  177633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1127 23:57:13.441098  177633 provision.go:86] duration metric: configureAuth took 602.073404ms
	I1127 23:57:13.441132  177633 ubuntu.go:193] setting minikube options for container-runtime
	I1127 23:57:13.441340  177633 config.go:182] Loaded profile config "running-upgrade-902052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1127 23:57:13.441472  177633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-902052
	I1127 23:57:13.462817  177633 main.go:141] libmachine: Using SSH client type: native
	I1127 23:57:13.463215  177633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32936 <nil> <nil>}
	I1127 23:57:13.463239  177633 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1127 23:57:13.916494  177633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1127 23:57:13.916519  177633 machine.go:91] provisioned docker machine in 1.391945084s
	I1127 23:57:13.916529  177633 start.go:300] post-start starting for "running-upgrade-902052" (driver="docker")
	I1127 23:57:13.916539  177633 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1127 23:57:13.916587  177633 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1127 23:57:13.916624  177633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-902052
	I1127 23:57:13.934021  177633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32936 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/running-upgrade-902052/id_rsa Username:docker}
	I1127 23:57:14.021573  177633 ssh_runner.go:195] Run: cat /etc/os-release
	I1127 23:57:14.024691  177633 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1127 23:57:14.024719  177633 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1127 23:57:14.024728  177633 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1127 23:57:14.024734  177633 info.go:137] Remote host: Ubuntu 19.10
	I1127 23:57:14.024743  177633 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4554/.minikube/addons for local assets ...
	I1127 23:57:14.024802  177633 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4554/.minikube/files for local assets ...
	I1127 23:57:14.024874  177633 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4554/.minikube/files/etc/ssl/certs/113062.pem -> 113062.pem in /etc/ssl/certs
	I1127 23:57:14.024952  177633 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1127 23:57:14.031837  177633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/files/etc/ssl/certs/113062.pem --> /etc/ssl/certs/113062.pem (1708 bytes)
	I1127 23:57:14.054913  177633 start.go:303] post-start completed in 138.368453ms
	I1127 23:57:14.054998  177633 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 23:57:14.055046  177633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-902052
	I1127 23:57:14.074892  177633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32936 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/running-upgrade-902052/id_rsa Username:docker}
	I1127 23:57:14.159894  177633 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1127 23:57:14.164572  177633 fix.go:56] fixHost completed within 1.66954314s
	I1127 23:57:14.164644  177633 start.go:83] releasing machines lock for "running-upgrade-902052", held for 1.669632412s
	I1127 23:57:14.164761  177633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-902052
	I1127 23:57:14.190916  177633 ssh_runner.go:195] Run: cat /version.json
	I1127 23:57:14.190967  177633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-902052
	I1127 23:57:14.191052  177633 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1127 23:57:14.191120  177633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-902052
	I1127 23:57:14.209629  177633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32936 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/running-upgrade-902052/id_rsa Username:docker}
	I1127 23:57:14.211216  177633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32936 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/running-upgrade-902052/id_rsa Username:docker}
	W1127 23:57:14.322268  177633 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1127 23:57:14.322348  177633 ssh_runner.go:195] Run: systemctl --version
	I1127 23:57:14.326425  177633 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1127 23:57:14.392695  177633 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1127 23:57:14.400403  177633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 23:57:14.418287  177633 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1127 23:57:14.418359  177633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 23:57:14.443134  177633 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1127 23:57:14.443162  177633 start.go:472] detecting cgroup driver to use...
	I1127 23:57:14.443210  177633 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1127 23:57:14.443257  177633 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1127 23:57:14.472887  177633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1127 23:57:14.483691  177633 docker.go:203] disabling cri-docker service (if available) ...
	I1127 23:57:14.483777  177633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1127 23:57:14.495891  177633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1127 23:57:14.507981  177633 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1127 23:57:14.520676  177633 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1127 23:57:14.520737  177633 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1127 23:57:14.620709  177633 docker.go:219] disabling docker service ...
	I1127 23:57:14.620776  177633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1127 23:57:14.633050  177633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1127 23:57:14.644971  177633 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1127 23:57:14.741858  177633 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1127 23:57:14.838388  177633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1127 23:57:14.849175  177633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 23:57:14.867876  177633 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1127 23:57:14.867938  177633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:57:14.885754  177633 out.go:177] 
	W1127 23:57:14.887514  177633 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1127 23:57:14.887534  177633 out.go:239] * 
	W1127 23:57:14.888479  177633 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1127 23:57:14.890604  177633 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-902052 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-11-27 23:57:14.917665139 +0000 UTC m=+1951.716993411
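This stderr and the near-identical one in TestStoppedBinaryUpgrade/Upgrade below point at the same root cause: the HEAD binary unconditionally runs sed against /etc/crio/crio.conf.d/02-crio.conf, but the container provisioned by minikube v1.9.0 has no such drop-in file (on that older kicbase image, CRI-O presumably still keeps its whole configuration in the monolithic /etc/crio/crio.conf), so sed exits with status 2 and start aborts with RUNTIME_ENABLE. A guarded update that probes for the drop-in and falls back to the monolithic config would tolerate both layouts. The Go sketch below only illustrates that idea; the runner interface, updatePauseImage, and echoRunner are hypothetical stand-ins for the ssh_runner calls in the log, not minikube's actual code, and the fallback path is an assumption about the old image.

	package main

	import "fmt"

	// runner is a hypothetical stand-in for the ssh_runner seen in the log.
	type runner interface {
		Run(cmd string) error
	}

	// echoRunner prints the commands it would run, for a dry run.
	type echoRunner struct{}

	func (echoRunner) Run(cmd string) error {
		fmt.Println("would run:", cmd)
		return nil
	}

	// updatePauseImage (hypothetical) probes for the drop-in config first and
	// falls back to the monolithic config assumed on older kicbase images.
	func updatePauseImage(r runner, pauseImage string) error {
		for _, p := range []string{
			"/etc/crio/crio.conf.d/02-crio.conf", // layout of newer kicbase images
			"/etc/crio/crio.conf",                // assumed layout of the v1.9.0 image
		} {
			if r.Run(fmt.Sprintf("sudo test -f %s", p)) != nil {
				continue // file absent on this image, try the next candidate
			}
			return r.Run(fmt.Sprintf(
				`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, p))
		}
		return fmt.Errorf("no CRI-O config found to update pause_image")
	}

	func main() {
		_ = updatePauseImage(echoRunner{}, "registry.k8s.io/pause:3.2")
	}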
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-902052
helpers_test.go:235: (dbg) docker inspect running-upgrade-902052:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c1fdad6716ca167c525c089bc57b82672bb716ff0dd9024b77d8ab6f09ae4fd1",
	        "Created": "2023-11-27T23:56:05.102646007Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 157776,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-27T23:56:06.491263494Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/c1fdad6716ca167c525c089bc57b82672bb716ff0dd9024b77d8ab6f09ae4fd1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c1fdad6716ca167c525c089bc57b82672bb716ff0dd9024b77d8ab6f09ae4fd1/hostname",
	        "HostsPath": "/var/lib/docker/containers/c1fdad6716ca167c525c089bc57b82672bb716ff0dd9024b77d8ab6f09ae4fd1/hosts",
	        "LogPath": "/var/lib/docker/containers/c1fdad6716ca167c525c089bc57b82672bb716ff0dd9024b77d8ab6f09ae4fd1/c1fdad6716ca167c525c089bc57b82672bb716ff0dd9024b77d8ab6f09ae4fd1-json.log",
	        "Name": "/running-upgrade-902052",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-902052:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0c3894d43f53decc9217e454a5c66d2e57c3ffe5a29f95f6545a93ec150f45f0-init/diff:/var/lib/docker/overlay2/bd517866d33eaf234e2a2695d95c77c3cc191e1e08805593e77b9d8530493259/diff:/var/lib/docker/overlay2/845c7903181b0371ed72efb734a4fd945d856ccb916c66cc14fddd072c1ede76/diff:/var/lib/docker/overlay2/24fbafb4bde5ea979a69d19c62ca78316e5535d49122505631f0038a840a83be/diff:/var/lib/docker/overlay2/67977fb92933bf5d6c5c879a04beb4e1710164040411201858ea9c846173dd99/diff:/var/lib/docker/overlay2/f1fbd55283b85c1d1df9be548a6d490f2e42c4ddb12614f333ce58b424e4e87e/diff:/var/lib/docker/overlay2/0e341f748980ffa5ebe4f9ef4412a80313e7f128b349785234393ed9d56c4a42/diff:/var/lib/docker/overlay2/1db0d4f3d7c328ce68dadceeda03b9089fe882c3f56bfbd73fba276d56079c6d/diff:/var/lib/docker/overlay2/5dbd7b17914a9f5888855e96f38aeff4d716870f4e6524c282de5903fbd13445/diff:/var/lib/docker/overlay2/6050c210635405aa11636bd0bfaa180d0ad97d7a09ff41af8dc38f70bcca4584/diff:/var/lib/docker/overlay2/8510d7
d29f70026e019c6bbf14385a5e16d62b764465604b94c000b128faeac5/diff:/var/lib/docker/overlay2/5d654dbc31024ff7d7842cbb9e31d90b10949ff4fa163c19941d5edce26d7c49/diff:/var/lib/docker/overlay2/96878a8ba8faa5503f8de689e38324e5259d5b18c1c906bce6594268bcdf8c77/diff:/var/lib/docker/overlay2/dc23bdae73b55b593892955687db2929842887b8a873f2fde6bb962d0671df55/diff:/var/lib/docker/overlay2/f943f144a3e654d35f41edd491a178cd4138a28da9947aa802a46a2a6594c5d6/diff:/var/lib/docker/overlay2/08f98c1ca720e94d3b29c581c8aa642e79a86a38be3e72db0700f5d0fd2e3873/diff:/var/lib/docker/overlay2/a528a4ceeee4cd02b29be173118a8279a099f2a3aa0414ba7e7805afeec16a88/diff:/var/lib/docker/overlay2/726b7ce11dbd18aeab770e2b27f2d4a3c11519ed1703d21505657527b3115f5a/diff:/var/lib/docker/overlay2/e73a373e98660d378020797a8f0e7a73b2bd17f956feb01c88834595d970d69d/diff:/var/lib/docker/overlay2/f43e29575448be75001f3c4c62facdb582569f8f0f43a9eac9030ff1ab500813/diff:/var/lib/docker/overlay2/0234581c107097f289f5270d13d57f4af1a8c7fa66e429297d62fbc5864f6d01/diff:/var/lib/d
ocker/overlay2/fae81056c9700ee2ff6cedb3d3464bea5d5c1c3fdec0a2252488146a9572e5b2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0c3894d43f53decc9217e454a5c66d2e57c3ffe5a29f95f6545a93ec150f45f0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0c3894d43f53decc9217e454a5c66d2e57c3ffe5a29f95f6545a93ec150f45f0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0c3894d43f53decc9217e454a5c66d2e57c3ffe5a29f95f6545a93ec150f45f0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-902052",
	                "Source": "/var/lib/docker/volumes/running-upgrade-902052/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-902052",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-902052",
	                "name.minikube.sigs.k8s.io": "running-upgrade-902052",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44520f14aad3c883d55ff2eca0aec9e05dbfac8ce0d97247ff47f891030290d5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32936"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32935"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32934"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/44520f14aad3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "2108d42ba3b2dabc67c4e6ca1ff12225fb82ca3f0acf4d1b7e7294eeaefc9b55",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.4",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:04",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "05a8343b35c4492c19568cdd4caf62110dab79851957d8ceaf3a92472847433f",
	                    "EndpointID": "2108d42ba3b2dabc67c4e6ca1ff12225fb82ca3f0acf4d1b7e7294eeaefc9b55",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.4",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:04",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
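The inspect output above is also the source of the SSH endpoint used throughout the stderr log: the cli_runner lines apply a Go template to .NetworkSettings.Ports to resolve the container's 22/tcp to 127.0.0.1:32936. As a rough sketch (not minikube's implementation), the same lookup can be reproduced with os/exec and the exact template from the log; the hostSSHPort helper name is made up for illustration.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostSSHPort resolves the host port mapped to a container's 22/tcp using
	// the same inspect template that appears in the stderr above.
	func hostSSHPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		// Container name taken from the inspect output above.
		port, err := hostSSHPort("running-upgrade-902052")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("22/tcp maps to 127.0.0.1:" + port) // 32936 in this report
	}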
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-902052 -n running-upgrade-902052
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-902052 -n running-upgrade-902052: exit status 4 (365.41719ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1127 23:57:15.256997  179854 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-902052" does not appear in /home/jenkins/minikube-integration/17206-4554/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-902052" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
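The exit status 4 here is the missing-profile case: status.go:415 cannot extract an API-server IP because "running-upgrade-902052" was apparently never written to the kubeconfig (the failed start aborted before the cluster was bootstrapped), which is also why the stdout warns about a stale context and suggests `minikube update-context`. A minimal sketch of the same existence check with client-go, assuming the kubeconfig path from the error message above:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Path copied from the status.go error above; adjust for your environment.
		cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/17206-4554/kubeconfig")
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		// The profile's cluster entry is what status.go extracts the IP from;
		// its absence is the condition reported above as exit status 4.
		if _, ok := cfg.Clusters["running-upgrade-902052"]; !ok {
			fmt.Println("running-upgrade-902052 missing from kubeconfig; run `minikube update-context`")
		}
	}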
helpers_test.go:175: Cleaning up "running-upgrade-902052" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-902052
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-902052: (2.145645925s)
--- FAIL: TestRunningBinaryUpgrade (73.22s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (112.05s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.9.0.3015249122.exe start -p stopped-upgrade-211581 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1127 23:55:14.090828   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/functional-223758/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.9.0.3015249122.exe start -p stopped-upgrade-211581 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m32.628162461s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.9.0.3015249122.exe -p stopped-upgrade-211581 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.9.0.3015249122.exe -p stopped-upgrade-211581 stop: (13.890368023s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-211581 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-211581 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (5.524289132s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-211581] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-4554/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4554/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-211581 in cluster stopped-upgrade-211581
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-211581" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1127 23:56:55.449210  170900 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:56:55.449367  170900 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:56:55.449379  170900 out.go:309] Setting ErrFile to fd 2...
	I1127 23:56:55.449387  170900 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:56:55.449693  170900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4554/.minikube/bin
	I1127 23:56:55.450427  170900 out.go:303] Setting JSON to false
	I1127 23:56:55.451776  170900 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2368,"bootTime":1701127048,"procs":395,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 23:56:55.451837  170900 start.go:138] virtualization: kvm guest
	I1127 23:56:55.454863  170900 out.go:177] * [stopped-upgrade-211581] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 23:56:55.456329  170900 out.go:177]   - MINIKUBE_LOCATION=17206
	I1127 23:56:55.457709  170900 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:56:55.456355  170900 notify.go:220] Checking for updates...
	I1127 23:56:55.461157  170900 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-4554/kubeconfig
	I1127 23:56:55.462409  170900 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4554/.minikube
	I1127 23:56:55.463914  170900 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 23:56:55.465380  170900 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 23:56:55.467619  170900 config.go:182] Loaded profile config "stopped-upgrade-211581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1127 23:56:55.467643  170900 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1127 23:56:55.469415  170900 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1127 23:56:55.470946  170900 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:56:55.511803  170900 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 23:56:55.511901  170900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:56:55.589068  170900 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:75 OomKillDisable:true NGoroutines:79 SystemTime:2023-11-27 23:56:55.577815461 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 23:56:55.589213  170900 docker.go:295] overlay module found
	I1127 23:56:55.591350  170900 out.go:177] * Using the docker driver based on existing profile
	I1127 23:56:55.593015  170900 start.go:298] selected driver: docker
	I1127 23:56:55.593036  170900 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-211581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-211581 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1127 23:56:55.593166  170900 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 23:56:55.594317  170900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:56:55.652560  170900 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:75 OomKillDisable:true NGoroutines:79 SystemTime:2023-11-27 23:56:55.642916977 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 23:56:55.652862  170900 cni.go:84] Creating CNI manager for ""
	I1127 23:56:55.652886  170900 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1127 23:56:55.652895  170900 start_flags.go:323] config:
	{Name:stopped-upgrade-211581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-211581 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1127 23:56:55.654992  170900 out.go:177] * Starting control plane node stopped-upgrade-211581 in cluster stopped-upgrade-211581
	I1127 23:56:55.656526  170900 cache.go:121] Beginning downloading kic base image for docker with crio
	I1127 23:56:55.657997  170900 out.go:177] * Pulling base image ...
	I1127 23:56:55.659520  170900 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1127 23:56:55.659642  170900 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 23:56:55.679038  170900 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1127 23:56:55.679061  170900 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	W1127 23:56:55.697671  170900 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1127 23:56:55.697871  170900 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/stopped-upgrade-211581/config.json ...
	I1127 23:56:55.697909  170900 cache.go:107] acquiring lock: {Name:mk9e92729a49752bfd048d6b7ac6eb2904673dd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:56:55.697935  170900 cache.go:107] acquiring lock: {Name:mka77dd2e822e83f3b6b8ee9c876858ae2b332d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:56:55.698013  170900 cache.go:115] /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1127 23:56:55.698033  170900 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 124.122µs
	I1127 23:56:55.698089  170900 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1127 23:56:55.698013  170900 cache.go:115] /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1127 23:56:55.697993  170900 cache.go:107] acquiring lock: {Name:mkf7652393bb754d7dcb51096be824a1dd596f9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:56:55.698102  170900 cache.go:107] acquiring lock: {Name:mkd363673b293eb97ca614b8ed2bb456e60ce5f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:56:55.698025  170900 cache.go:107] acquiring lock: {Name:mkd0f39bfd5d03d654698430841c64949c3705b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:56:55.698151  170900 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 213.577µs
	I1127 23:56:55.698164  170900 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1127 23:56:55.698171  170900 cache.go:107] acquiring lock: {Name:mk09e71bc17b5abe067eb1ba3aea787566fa7949 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:56:55.698172  170900 cache.go:107] acquiring lock: {Name:mk1c9582307db87837ce900a4211d899eb4c9293 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:56:55.698236  170900 cache.go:115] /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1127 23:56:55.698245  170900 cache.go:115] /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1127 23:56:55.698253  170900 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 228.321µs
	I1127 23:56:55.698241  170900 cache.go:115] /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1127 23:56:55.698273  170900 cache.go:115] /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1127 23:56:55.698279  170900 cache.go:115] /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1127 23:56:55.698282  170900 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 307.797µs
	I1127 23:56:55.698275  170900 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1127 23:56:55.698294  170900 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1127 23:56:55.698290  170900 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 161.656µs
	I1127 23:56:55.698358  170900 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1127 23:56:55.698261  170900 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 215.666µs
	I1127 23:56:55.698368  170900 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1127 23:56:55.698291  170900 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 160.237µs
	I1127 23:56:55.698387  170900 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1127 23:56:55.698324  170900 cache.go:194] Successfully downloaded all kic artifacts
	I1127 23:56:55.698417  170900 start.go:365] acquiring machines lock for stopped-upgrade-211581: {Name:mkd041e538f41ce332fae7d325224a43f7209d27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:56:55.697986  170900 cache.go:107] acquiring lock: {Name:mk5920c65f24682bffd31b8b5858c01ccdbe921b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:56:55.698500  170900 start.go:369] acquired machines lock for "stopped-upgrade-211581" in 65.367µs
	I1127 23:56:55.698507  170900 cache.go:115] /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1127 23:56:55.698523  170900 start.go:96] Skipping create...Using existing machine configuration
	I1127 23:56:55.698528  170900 fix.go:54] fixHost starting: m01
	I1127 23:56:55.698528  170900 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 561.102µs
	I1127 23:56:55.698540  170900 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17206-4554/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1127 23:56:55.698557  170900 cache.go:87] Successfully saved all images to host disk.
	I1127 23:56:55.698803  170900 cli_runner.go:164] Run: docker container inspect stopped-upgrade-211581 --format={{.State.Status}}
	I1127 23:56:55.715138  170900 fix.go:102] recreateIfNeeded on stopped-upgrade-211581: state=Stopped err=<nil>
	W1127 23:56:55.715165  170900 fix.go:128] unexpected machine state, will restart: <nil>
	I1127 23:56:55.718011  170900 out.go:177] * Restarting existing docker container for "stopped-upgrade-211581" ...
	I1127 23:56:55.719632  170900 cli_runner.go:164] Run: docker start stopped-upgrade-211581
	I1127 23:56:56.010143  170900 cli_runner.go:164] Run: docker container inspect stopped-upgrade-211581 --format={{.State.Status}}
	I1127 23:56:56.030901  170900 kic.go:430] container "stopped-upgrade-211581" state is running.
	I1127 23:56:56.031298  170900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-211581
	I1127 23:56:56.052767  170900 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/stopped-upgrade-211581/config.json ...
	I1127 23:56:56.053005  170900 machine.go:88] provisioning docker machine ...
	I1127 23:56:56.053030  170900 ubuntu.go:169] provisioning hostname "stopped-upgrade-211581"
	I1127 23:56:56.053083  170900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-211581
	I1127 23:56:56.074759  170900 main.go:141] libmachine: Using SSH client type: native
	I1127 23:56:56.075353  170900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32949 <nil> <nil>}
	I1127 23:56:56.075375  170900 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-211581 && echo "stopped-upgrade-211581" | sudo tee /etc/hostname
	I1127 23:56:56.076035  170900 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43508->127.0.0.1:32949: read: connection reset by peer
	I1127 23:56:59.190150  170900 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-211581
	
	I1127 23:56:59.190250  170900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-211581
	I1127 23:56:59.211054  170900 main.go:141] libmachine: Using SSH client type: native
	I1127 23:56:59.211397  170900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32949 <nil> <nil>}
	I1127 23:56:59.211422  170900 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-211581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-211581/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-211581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1127 23:56:59.317951  170900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1127 23:56:59.318000  170900 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4554/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4554/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4554/.minikube}
	I1127 23:56:59.318033  170900 ubuntu.go:177] setting up certificates
	I1127 23:56:59.318066  170900 provision.go:83] configureAuth start
	I1127 23:56:59.318138  170900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-211581
	I1127 23:56:59.334281  170900 provision.go:138] copyHostCerts
	I1127 23:56:59.334352  170900 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4554/.minikube/cert.pem, removing ...
	I1127 23:56:59.334363  170900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4554/.minikube/cert.pem
	I1127 23:56:59.334435  170900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4554/.minikube/cert.pem (1123 bytes)
	I1127 23:56:59.334562  170900 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4554/.minikube/key.pem, removing ...
	I1127 23:56:59.334574  170900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4554/.minikube/key.pem
	I1127 23:56:59.334617  170900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4554/.minikube/key.pem (1679 bytes)
	I1127 23:56:59.334696  170900 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4554/.minikube/ca.pem, removing ...
	I1127 23:56:59.334703  170900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4554/.minikube/ca.pem
	I1127 23:56:59.334727  170900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4554/.minikube/ca.pem (1078 bytes)
	I1127 23:56:59.334785  170900 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4554/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-211581 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-211581]
	I1127 23:56:59.397635  170900 provision.go:172] copyRemoteCerts
	I1127 23:56:59.397708  170900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1127 23:56:59.397747  170900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-211581
	I1127 23:56:59.413711  170900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32949 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/stopped-upgrade-211581/id_rsa Username:docker}
	I1127 23:56:59.493140  170900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1127 23:56:59.509539  170900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1127 23:56:59.525551  170900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1127 23:56:59.541127  170900 provision.go:86] duration metric: configureAuth took 223.046531ms
	I1127 23:56:59.541149  170900 ubuntu.go:193] setting minikube options for container-runtime
	I1127 23:56:59.541329  170900 config.go:182] Loaded profile config "stopped-upgrade-211581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1127 23:56:59.541487  170900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-211581
	I1127 23:56:59.556971  170900 main.go:141] libmachine: Using SSH client type: native
	I1127 23:56:59.557284  170900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32949 <nil> <nil>}
	I1127 23:56:59.557302  170900 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1127 23:57:00.113927  170900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1127 23:57:00.113957  170900 machine.go:91] provisioned docker machine in 4.060939929s
	I1127 23:57:00.113969  170900 start.go:300] post-start starting for "stopped-upgrade-211581" (driver="docker")
	I1127 23:57:00.113992  170900 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1127 23:57:00.114084  170900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1127 23:57:00.114131  170900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-211581
	I1127 23:57:00.132204  170900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32949 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/stopped-upgrade-211581/id_rsa Username:docker}
	I1127 23:57:00.217246  170900 ssh_runner.go:195] Run: cat /etc/os-release
	I1127 23:57:00.220314  170900 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1127 23:57:00.220348  170900 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1127 23:57:00.220363  170900 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1127 23:57:00.220372  170900 info.go:137] Remote host: Ubuntu 19.10
	I1127 23:57:00.220390  170900 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4554/.minikube/addons for local assets ...
	I1127 23:57:00.220462  170900 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4554/.minikube/files for local assets ...
	I1127 23:57:00.220563  170900 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4554/.minikube/files/etc/ssl/certs/113062.pem -> 113062.pem in /etc/ssl/certs
	I1127 23:57:00.220683  170900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1127 23:57:00.227658  170900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4554/.minikube/files/etc/ssl/certs/113062.pem --> /etc/ssl/certs/113062.pem (1708 bytes)
	I1127 23:57:00.244920  170900 start.go:303] post-start completed in 130.926938ms
	I1127 23:57:00.245001  170900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 23:57:00.245044  170900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-211581
	I1127 23:57:00.263781  170900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32949 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/stopped-upgrade-211581/id_rsa Username:docker}
	I1127 23:57:00.342624  170900 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1127 23:57:00.346366  170900 fix.go:56] fixHost completed within 4.64783218s
	I1127 23:57:00.346393  170900 start.go:83] releasing machines lock for "stopped-upgrade-211581", held for 4.647877743s
	I1127 23:57:00.346448  170900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-211581
	I1127 23:57:00.363540  170900 ssh_runner.go:195] Run: cat /version.json
	I1127 23:57:00.363582  170900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-211581
	I1127 23:57:00.363628  170900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1127 23:57:00.363686  170900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-211581
	I1127 23:57:00.380023  170900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32949 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/stopped-upgrade-211581/id_rsa Username:docker}
	I1127 23:57:00.380571  170900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32949 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/stopped-upgrade-211581/id_rsa Username:docker}
	W1127 23:57:00.491641  170900 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1127 23:57:00.491717  170900 ssh_runner.go:195] Run: systemctl --version
	I1127 23:57:00.495557  170900 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1127 23:57:00.546689  170900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1127 23:57:00.551161  170900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 23:57:00.571949  170900 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1127 23:57:00.572039  170900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 23:57:00.600107  170900 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1127 23:57:00.600149  170900 start.go:472] detecting cgroup driver to use...
	I1127 23:57:00.600187  170900 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1127 23:57:00.600252  170900 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1127 23:57:00.621044  170900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1127 23:57:00.630606  170900 docker.go:203] disabling cri-docker service (if available) ...
	I1127 23:57:00.630661  170900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1127 23:57:00.638998  170900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1127 23:57:00.648325  170900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1127 23:57:00.657489  170900 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1127 23:57:00.657545  170900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1127 23:57:00.734311  170900 docker.go:219] disabling docker service ...
	I1127 23:57:00.734378  170900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1127 23:57:00.745918  170900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1127 23:57:00.756297  170900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1127 23:57:00.815627  170900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1127 23:57:00.881609  170900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1127 23:57:00.890384  170900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 23:57:00.902161  170900 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1127 23:57:00.902232  170900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:57:00.911771  170900 out.go:177] 
	W1127 23:57:00.913310  170900 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1127 23:57:00.913329  170900 out.go:239] * 
	W1127 23:57:00.914158  170900 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1127 23:57:00.915687  170900 out.go:177] 

** /stderr **
version_upgrade_test.go:213: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-211581 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (112.05s)
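
Note on the failure: the fatal step is the unconditional in-place edit of the cri-o drop-in config. The command sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf exits with status 2 because the file is missing; the node was provisioned by the old v1.9.0 binary, whose base image appears to predate the /etc/crio/crio.conf.d/ drop-in layout, leaving only the legacy /etc/crio/crio.conf. A minimal Go sketch of a more defensive update follows; updatePauseImage is a hypothetical helper for illustration, not minikube's actual implementation.

// Hypothetical sketch: update pause_image only in a cri-o config file
// that actually exists, falling back from the drop-in to the legacy
// single file so older base images do not break the sed edit.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func updatePauseImage(image string) error {
	candidates := []string{
		"/etc/crio/crio.conf.d/02-crio.conf", // modern drop-in layout
		"/etc/crio/crio.conf",                // legacy single-file layout
	}
	for _, path := range candidates {
		if _, err := os.Stat(path); err != nil {
			continue // absent on this base image; try the next candidate
		}
		expr := fmt.Sprintf(`s|^.*pause_image = .*$|pause_image = "%s"|`, image)
		return exec.Command("sudo", "sed", "-i", expr, path).Run()
	}
	return fmt.Errorf("no cri-o config file found to update")
}

func main() {
	if err := updatePauseImage("registry.k8s.io/pause:3.2"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}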


Test pass (281/314)

Order passed test Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 9.14
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.4/json-events 6.05
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.07
17 TestDownloadOnly/v1.29.0-rc.0/json-events 7.74
18 TestDownloadOnly/v1.29.0-rc.0/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.0/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 0.21
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
25 TestDownloadOnlyKic 1.28
26 TestBinaryMirror 0.73
27 TestOffline 56.12
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
32 TestAddons/Setup 131.8
34 TestAddons/parallel/Registry 13.52
36 TestAddons/parallel/InspektorGadget 11.02
37 TestAddons/parallel/MetricsServer 5.73
38 TestAddons/parallel/HelmTiller 9.65
40 TestAddons/parallel/CSI 74.39
41 TestAddons/parallel/Headlamp 14.09
42 TestAddons/parallel/CloudSpanner 5.51
43 TestAddons/parallel/LocalPath 9.76
44 TestAddons/parallel/NvidiaDevicePlugin 5.48
47 TestAddons/serial/GCPAuth/Namespaces 0.11
48 TestAddons/StoppedEnableDisable 12.13
49 TestCertOptions 24.79
50 TestCertExpiration 226.1
52 TestForceSystemdFlag 25.38
53 TestForceSystemdEnv 31.99
55 TestKVMDriverInstallOrUpdate 3.58
59 TestErrorSpam/setup 21.93
60 TestErrorSpam/start 0.64
61 TestErrorSpam/status 0.86
62 TestErrorSpam/pause 1.51
63 TestErrorSpam/unpause 1.5
64 TestErrorSpam/stop 1.41
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 66.93
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 39.6
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.07
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.85
76 TestFunctional/serial/CacheCmd/cache/add_local 1.13
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 32.04
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.33
87 TestFunctional/serial/LogsFileCmd 1.36
88 TestFunctional/serial/InvalidService 4.27
90 TestFunctional/parallel/ConfigCmd 0.44
91 TestFunctional/parallel/DashboardCmd 11.16
92 TestFunctional/parallel/DryRun 0.39
93 TestFunctional/parallel/InternationalLanguage 0.18
94 TestFunctional/parallel/StatusCmd 0.98
98 TestFunctional/parallel/ServiceCmdConnect 10.7
99 TestFunctional/parallel/AddonsCmd 0.16
100 TestFunctional/parallel/PersistentVolumeClaim 32.91
102 TestFunctional/parallel/SSHCmd 0.58
103 TestFunctional/parallel/CpCmd 1.23
104 TestFunctional/parallel/MySQL 22.18
105 TestFunctional/parallel/FileSync 0.31
106 TestFunctional/parallel/CertSync 1.95
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.64
114 TestFunctional/parallel/License 0.2
115 TestFunctional/parallel/Version/short 0.09
116 TestFunctional/parallel/Version/components 0.72
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
121 TestFunctional/parallel/ImageCommands/ImageBuild 1.82
122 TestFunctional/parallel/ImageCommands/Setup 1.11
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.29
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.45
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 20.33
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.58
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.51
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.86
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
136 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
140 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
142 TestFunctional/parallel/ServiceCmd/DeployApp 10.14
143 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.91
144 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.92
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
146 TestFunctional/parallel/ProfileCmd/profile_list 0.34
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
148 TestFunctional/parallel/MountCmd/any-port 5.95
149 TestFunctional/parallel/ServiceCmd/List 0.91
150 TestFunctional/parallel/ServiceCmd/JSONOutput 0.93
151 TestFunctional/parallel/MountCmd/specific-port 1.82
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.6
153 TestFunctional/parallel/ServiceCmd/Format 0.62
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.69
155 TestFunctional/parallel/ServiceCmd/URL 0.85
156 TestFunctional/delete_addon-resizer_images 0.07
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
162 TestIngressAddonLegacy/StartLegacyK8sCluster 68.83
164 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.8
165 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.53
169 TestJSONOutput/start/Command 66.81
170 TestJSONOutput/start/Audit 0
172 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/pause/Command 0.64
176 TestJSONOutput/pause/Audit 0
178 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/unpause/Command 0.59
182 TestJSONOutput/unpause/Audit 0
184 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/stop/Command 5.74
188 TestJSONOutput/stop/Audit 0
190 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
192 TestErrorJSONOutput 0.23
194 TestKicCustomNetwork/create_custom_network 33.01
195 TestKicCustomNetwork/use_default_bridge_network 24.17
196 TestKicExistingNetwork 27.59
197 TestKicCustomSubnet 26.74
198 TestKicStaticIP 27.51
199 TestMainNoArgs 0.06
200 TestMinikubeProfile 53.58
203 TestMountStart/serial/StartWithMountFirst 5.32
204 TestMountStart/serial/VerifyMountFirst 0.25
205 TestMountStart/serial/StartWithMountSecond 8.14
206 TestMountStart/serial/VerifyMountSecond 0.25
207 TestMountStart/serial/DeleteFirst 1.61
208 TestMountStart/serial/VerifyMountPostDelete 0.25
209 TestMountStart/serial/Stop 1.21
210 TestMountStart/serial/RestartStopped 7.03
211 TestMountStart/serial/VerifyMountPostStop 0.26
214 TestMultiNode/serial/FreshStart2Nodes 74.27
215 TestMultiNode/serial/DeployApp2Nodes 3.67
217 TestMultiNode/serial/AddNode 46.94
218 TestMultiNode/serial/ProfileList 0.27
219 TestMultiNode/serial/CopyFile 9.18
220 TestMultiNode/serial/StopNode 2.14
221 TestMultiNode/serial/StartAfterStop 11.11
222 TestMultiNode/serial/RestartKeepsNodes 113.55
223 TestMultiNode/serial/DeleteNode 4.67
224 TestMultiNode/serial/StopMultiNode 23.85
225 TestMultiNode/serial/RestartMultiNode 78.61
226 TestMultiNode/serial/ValidateNameConflict 26.1
231 TestPreload 148.65
233 TestScheduledStopUnix 100.2
236 TestInsufficientStorage 13.14
239 TestKubernetesUpgrade 350.39
240 TestMissingContainerUpgrade 165.93
242 TestStoppedBinaryUpgrade/Setup 0.77
243 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
244 TestNoKubernetes/serial/StartWithK8s 35.25
246 TestNoKubernetes/serial/StartWithStopK8s 8.77
247 TestNoKubernetes/serial/Start 9.94
248 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
249 TestNoKubernetes/serial/ProfileList 3.24
257 TestNoKubernetes/serial/Stop 1.66
258 TestNoKubernetes/serial/StartNoArgs 8.87
259 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
260 TestStoppedBinaryUpgrade/MinikubeLogs 0.57
268 TestNetworkPlugins/group/false 4.53
273 TestPause/serial/Start 44.47
274 TestPause/serial/SecondStartNoReconfiguration 28.03
276 TestStartStop/group/old-k8s-version/serial/FirstStart 112.03
277 TestPause/serial/Pause 0.9
278 TestPause/serial/VerifyStatus 0.3
279 TestPause/serial/Unpause 0.97
280 TestPause/serial/PauseAgain 0.84
281 TestPause/serial/DeletePaused 2.81
282 TestPause/serial/VerifyDeletedResources 0.49
284 TestStartStop/group/no-preload/serial/FirstStart 72.37
285 TestStartStop/group/no-preload/serial/DeployApp 8.76
286 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.85
287 TestStartStop/group/no-preload/serial/Stop 11.92
288 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
289 TestStartStop/group/no-preload/serial/SecondStart 336.83
290 TestStartStop/group/old-k8s-version/serial/DeployApp 8.39
291 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.77
292 TestStartStop/group/old-k8s-version/serial/Stop 12.02
293 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
294 TestStartStop/group/old-k8s-version/serial/SecondStart 42.21
296 TestStartStop/group/embed-certs/serial/FirstStart 70.98
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 17.02
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
300 TestStartStop/group/old-k8s-version/serial/Pause 2.91
302 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 70.18
304 TestStartStop/group/newest-cni/serial/FirstStart 35.41
305 TestStartStop/group/embed-certs/serial/DeployApp 8.38
306 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.95
307 TestStartStop/group/embed-certs/serial/Stop 12.03
308 TestStartStop/group/newest-cni/serial/DeployApp 0
309 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.95
310 TestStartStop/group/newest-cni/serial/Stop 1.23
311 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
312 TestStartStop/group/newest-cni/serial/SecondStart 25.72
313 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
314 TestStartStop/group/embed-certs/serial/SecondStart 338.09
315 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
316 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
317 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
318 TestStartStop/group/newest-cni/serial/Pause 2.84
319 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.37
320 TestNetworkPlugins/group/auto/Start 69.77
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.87
322 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.92
323 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
324 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 337.84
325 TestNetworkPlugins/group/auto/KubeletFlags 0.27
326 TestNetworkPlugins/group/auto/NetCatPod 9.25
327 TestNetworkPlugins/group/auto/DNS 0.15
328 TestNetworkPlugins/group/auto/Localhost 0.14
329 TestNetworkPlugins/group/auto/HairPin 0.13
330 TestNetworkPlugins/group/flannel/Start 57.74
331 TestNetworkPlugins/group/flannel/ControllerPod 5.02
332 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
333 TestNetworkPlugins/group/flannel/NetCatPod 9.33
334 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 10.02
335 TestNetworkPlugins/group/flannel/DNS 0.17
336 TestNetworkPlugins/group/flannel/Localhost 0.16
337 TestNetworkPlugins/group/flannel/HairPin 0.15
338 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
339 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.32
340 TestStartStop/group/no-preload/serial/Pause 2.83
341 TestNetworkPlugins/group/enable-default-cni/Start 37.63
342 TestNetworkPlugins/group/bridge/Start 38.15
343 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
344 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.29
345 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
346 TestNetworkPlugins/group/bridge/NetCatPod 9.25
347 TestNetworkPlugins/group/enable-default-cni/DNS 32.54
348 TestNetworkPlugins/group/bridge/DNS 0.15
349 TestNetworkPlugins/group/bridge/Localhost 0.13
350 TestNetworkPlugins/group/bridge/HairPin 0.13
351 TestNetworkPlugins/group/calico/Start 63.11
352 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
353 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
354 TestNetworkPlugins/group/kindnet/Start 73.23
355 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 12.02
356 TestNetworkPlugins/group/calico/ControllerPod 5.02
357 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
358 TestNetworkPlugins/group/calico/KubeletFlags 0.31
359 TestNetworkPlugins/group/calico/NetCatPod 10.33
360 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.35
361 TestStartStop/group/embed-certs/serial/Pause 2.9
362 TestNetworkPlugins/group/custom-flannel/Start 55.68
363 TestNetworkPlugins/group/calico/DNS 0.19
364 TestNetworkPlugins/group/calico/Localhost 0.18
365 TestNetworkPlugins/group/calico/HairPin 0.24
366 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 14.02
367 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
368 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
369 TestNetworkPlugins/group/kindnet/NetCatPod 10.24
370 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
371 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.36
372 TestNetworkPlugins/group/kindnet/DNS 0.18
373 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.81
374 TestNetworkPlugins/group/kindnet/Localhost 0.17
375 TestNetworkPlugins/group/kindnet/HairPin 0.17
376 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
377 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.25
378 TestNetworkPlugins/group/custom-flannel/DNS 0.16
379 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
380 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
TestDownloadOnly/v1.16.0/json-events (9.14s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-824886 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-824886 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.136082273s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (9.14s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-824886
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-824886: exit status 85 (73.098742ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-824886 | jenkins | v1.32.0 | 27 Nov 23 23:24 UTC |          |
	|         | -p download-only-824886        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 23:24:43
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 23:24:43.300273   11317 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:24:43.300515   11317 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:24:43.300523   11317 out.go:309] Setting ErrFile to fd 2...
	I1127 23:24:43.300528   11317 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:24:43.300701   11317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4554/.minikube/bin
	W1127 23:24:43.300811   11317 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17206-4554/.minikube/config/config.json: open /home/jenkins/minikube-integration/17206-4554/.minikube/config/config.json: no such file or directory
	I1127 23:24:43.301350   11317 out.go:303] Setting JSON to true
	I1127 23:24:43.302187   11317 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":436,"bootTime":1701127048,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 23:24:43.302247   11317 start.go:138] virtualization: kvm guest
	I1127 23:24:43.305077   11317 out.go:97] [download-only-824886] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 23:24:43.306925   11317 out.go:169] MINIKUBE_LOCATION=17206
	W1127 23:24:43.305198   11317 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17206-4554/.minikube/cache/preloaded-tarball: no such file or directory
	I1127 23:24:43.305231   11317 notify.go:220] Checking for updates...
	I1127 23:24:43.310131   11317 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:24:43.311728   11317 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17206-4554/kubeconfig
	I1127 23:24:43.313426   11317 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4554/.minikube
	I1127 23:24:43.315253   11317 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1127 23:24:43.318265   11317 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1127 23:24:43.318534   11317 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:24:43.339702   11317 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 23:24:43.339768   11317 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:24:43.710680   11317 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:47 SystemTime:2023-11-27 23:24:43.701870767 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 23:24:43.710780   11317 docker.go:295] overlay module found
	I1127 23:24:43.712764   11317 out.go:97] Using the docker driver based on user configuration
	I1127 23:24:43.712792   11317 start.go:298] selected driver: docker
	I1127 23:24:43.712798   11317 start.go:902] validating driver "docker" against <nil>
	I1127 23:24:43.712882   11317 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:24:43.765282   11317 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:47 SystemTime:2023-11-27 23:24:43.756788793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 23:24:43.765509   11317 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1127 23:24:43.766259   11317 start_flags.go:394] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1127 23:24:43.766488   11317 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1127 23:24:43.768801   11317 out.go:169] Using Docker driver with root privileges
	I1127 23:24:43.770386   11317 cni.go:84] Creating CNI manager for ""
	I1127 23:24:43.770406   11317 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1127 23:24:43.770418   11317 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1127 23:24:43.770426   11317 start_flags.go:323] config:
	{Name:download-only-824886 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-824886 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:24:43.772203   11317 out.go:97] Starting control plane node download-only-824886 in cluster download-only-824886
	I1127 23:24:43.772219   11317 cache.go:121] Beginning downloading kic base image for docker with crio
	I1127 23:24:43.773661   11317 out.go:97] Pulling base image ...
	I1127 23:24:43.773685   11317 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1127 23:24:43.773821   11317 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 23:24:43.788647   11317 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1127 23:24:43.788814   11317 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory
	I1127 23:24:43.788900   11317 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1127 23:24:43.809832   11317 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1127 23:24:43.809860   11317 cache.go:56] Caching tarball of preloaded images
	I1127 23:24:43.810013   11317 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1127 23:24:43.812390   11317 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1127 23:24:43.812410   11317 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1127 23:24:43.846257   11317 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17206-4554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1127 23:24:46.815325   11317 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 as a tarball
	I1127 23:24:48.211475   11317 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1127 23:24:48.211564   11317 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17206-4554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-824886"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
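
Note on exit status 85: this is the expected outcome rather than a real failure. The profile was created with --download-only, so no control plane node exists ("The control plane node \"\" does not exist") and "minikube logs" bails out; the test passes because it anticipates the non-zero exit. The standalone Go sketch below shows one way to assert a specific exit status with os/exec; it is an illustration, not the test's actual helper.

// Hypothetical sketch: run a command and check for a specific exit
// status, the way the log above treats exit status 85 as expected for
// a profile that has no control plane node.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode extracts the process exit status from the error returned by
// exec.Cmd, or -1 if the command could not be started at all.
func exitCode(err error) int {
	if err == nil {
		return 0
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	return -1
}

func main() {
	_, err := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-824886").CombinedOutput()
	if exitCode(err) == 85 {
		fmt.Println("expected failure: no control plane node for a --download-only profile")
	}
}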

TestDownloadOnly/v1.28.4/json-events (6.05s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-824886 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-824886 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.05465677s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (6.05s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-824886
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-824886: exit status 85 (72.747428ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-824886 | jenkins | v1.32.0 | 27 Nov 23 23:24 UTC |          |
	|         | -p download-only-824886        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-824886 | jenkins | v1.32.0 | 27 Nov 23 23:24 UTC |          |
	|         | -p download-only-824886        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 23:24:52
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 23:24:52.512170   11475 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:24:52.512407   11475 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:24:52.512414   11475 out.go:309] Setting ErrFile to fd 2...
	I1127 23:24:52.512419   11475 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:24:52.512572   11475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4554/.minikube/bin
	W1127 23:24:52.512687   11475 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17206-4554/.minikube/config/config.json: open /home/jenkins/minikube-integration/17206-4554/.minikube/config/config.json: no such file or directory
	I1127 23:24:52.513094   11475 out.go:303] Setting JSON to true
	I1127 23:24:52.513847   11475 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":445,"bootTime":1701127048,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 23:24:52.513906   11475 start.go:138] virtualization: kvm guest
	I1127 23:24:52.516086   11475 out.go:97] [download-only-824886] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 23:24:52.517745   11475 out.go:169] MINIKUBE_LOCATION=17206
	I1127 23:24:52.516234   11475 notify.go:220] Checking for updates...
	I1127 23:24:52.522691   11475 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:24:52.524533   11475 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17206-4554/kubeconfig
	I1127 23:24:52.526115   11475 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4554/.minikube
	I1127 23:24:52.527550   11475 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1127 23:24:52.530040   11475 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1127 23:24:52.530511   11475 config.go:182] Loaded profile config "download-only-824886": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1127 23:24:52.530554   11475 start.go:810] api.Load failed for download-only-824886: filestore "download-only-824886": Docker machine "download-only-824886" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1127 23:24:52.530631   11475 driver.go:378] Setting default libvirt URI to qemu:///system
	W1127 23:24:52.530665   11475 start.go:810] api.Load failed for download-only-824886: filestore "download-only-824886": Docker machine "download-only-824886" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1127 23:24:52.553026   11475 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 23:24:52.553123   11475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:24:52.602691   11475 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:44 SystemTime:2023-11-27 23:24:52.593928969 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 23:24:52.602773   11475 docker.go:295] overlay module found
	I1127 23:24:52.604767   11475 out.go:97] Using the docker driver based on existing profile
	I1127 23:24:52.604791   11475 start.go:298] selected driver: docker
	I1127 23:24:52.604796   11475 start.go:902] validating driver "docker" against &{Name:download-only-824886 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-824886 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:24:52.604928   11475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:24:52.659399   11475 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:44 SystemTime:2023-11-27 23:24:52.65158983 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 23:24:52.660531   11475 cni.go:84] Creating CNI manager for ""
	I1127 23:24:52.660561   11475 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1127 23:24:52.660598   11475 start_flags.go:323] config:
	{Name:download-only-824886 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-824886 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:24:52.662722   11475 out.go:97] Starting control plane node download-only-824886 in cluster download-only-824886
	I1127 23:24:52.662747   11475 cache.go:121] Beginning downloading kic base image for docker with crio
	I1127 23:24:52.664341   11475 out.go:97] Pulling base image ...
	I1127 23:24:52.664362   11475 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:24:52.664465   11475 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 23:24:52.680975   11475 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1127 23:24:52.681091   11475 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory
	I1127 23:24:52.681109   11475 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory, skipping pull
	I1127 23:24:52.681120   11475 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in cache, skipping pull
	I1127 23:24:52.681128   11475 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 as a tarball
	I1127 23:24:52.694677   11475 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1127 23:24:52.694701   11475 cache.go:56] Caching tarball of preloaded images
	I1127 23:24:52.694821   11475 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:24:52.696657   11475 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1127 23:24:52.696680   11475 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1127 23:24:52.724810   11475 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/17206-4554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1127 23:24:56.797499   11475 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1127 23:24:56.797601   11475 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17206-4554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-824886"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)
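
Note on the preload download: the tarball URL in the log carries a ?checksum=md5:... query, and the "saving checksum" / "verifying checksum" lines show the archive is integrity-checked before it is used. The Go sketch below illustrates that kind of verification, reusing the file name and md5 value from the log above; verifyMD5 is a hypothetical helper, not minikube's download code.

// Hypothetical sketch: verify a downloaded file against an expected
// md5 digest, comparing lowercase hex strings.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// values taken from the download log above
	err := verifyMD5("preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4",
		"b0bd7b3b222c094c365d9c9e10e48fc7")
	fmt.Println(err)
}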

TestDownloadOnly/v1.29.0-rc.0/json-events (7.74s)

=== RUN   TestDownloadOnly/v1.29.0-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-824886 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-824886 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.737088457s)
--- PASS: TestDownloadOnly/v1.29.0-rc.0/json-events (7.74s)

TestDownloadOnly/v1.29.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.29.0-rc.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-824886
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-824886: exit status 85 (74.07355ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-824886 | jenkins | v1.32.0 | 27 Nov 23 23:24 UTC |          |
	|         | -p download-only-824886           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-824886 | jenkins | v1.32.0 | 27 Nov 23 23:24 UTC |          |
	|         | -p download-only-824886           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-824886 | jenkins | v1.32.0 | 27 Nov 23 23:24 UTC |          |
	|         | -p download-only-824886           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.0 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 23:24:58
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 23:24:58.641351   11618 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:24:58.641608   11618 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:24:58.641617   11618 out.go:309] Setting ErrFile to fd 2...
	I1127 23:24:58.641622   11618 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:24:58.641807   11618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4554/.minikube/bin
	W1127 23:24:58.641927   11618 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17206-4554/.minikube/config/config.json: open /home/jenkins/minikube-integration/17206-4554/.minikube/config/config.json: no such file or directory
	I1127 23:24:58.642348   11618 out.go:303] Setting JSON to true
	I1127 23:24:58.643116   11618 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":451,"bootTime":1701127048,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 23:24:58.643170   11618 start.go:138] virtualization: kvm guest
	I1127 23:24:58.645280   11618 out.go:97] [download-only-824886] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 23:24:58.646947   11618 out.go:169] MINIKUBE_LOCATION=17206
	I1127 23:24:58.645446   11618 notify.go:220] Checking for updates...
	I1127 23:24:58.648497   11618 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:24:58.649998   11618 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17206-4554/kubeconfig
	I1127 23:24:58.651544   11618 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4554/.minikube
	I1127 23:24:58.652848   11618 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1127 23:24:58.655355   11618 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1127 23:24:58.655793   11618 config.go:182] Loaded profile config "download-only-824886": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W1127 23:24:58.655830   11618 start.go:810] api.Load failed for download-only-824886: filestore "download-only-824886": Docker machine "download-only-824886" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1127 23:24:58.655921   11618 driver.go:378] Setting default libvirt URI to qemu:///system
	W1127 23:24:58.655949   11618 start.go:810] api.Load failed for download-only-824886: filestore "download-only-824886": Docker machine "download-only-824886" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1127 23:24:58.676408   11618 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 23:24:58.676470   11618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:24:58.725230   11618 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-11-27 23:24:58.717326057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 23:24:58.725315   11618 docker.go:295] overlay module found
	I1127 23:24:58.727330   11618 out.go:97] Using the docker driver based on existing profile
	I1127 23:24:58.727357   11618 start.go:298] selected driver: docker
	I1127 23:24:58.727364   11618 start.go:902] validating driver "docker" against &{Name:download-only-824886 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-824886 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:24:58.727517   11618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:24:58.775966   11618 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-11-27 23:24:58.76826357 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 23:24:58.776600   11618 cni.go:84] Creating CNI manager for ""
	I1127 23:24:58.776620   11618 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1127 23:24:58.776632   11618 start_flags.go:323] config:
	{Name:download-only-824886 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.0 ClusterName:download-only-824886 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:24:58.778761   11618 out.go:97] Starting control plane node download-only-824886 in cluster download-only-824886
	I1127 23:24:58.778784   11618 cache.go:121] Beginning downloading kic base image for docker with crio
	I1127 23:24:58.780210   11618 out.go:97] Pulling base image ...
	I1127 23:24:58.780234   11618 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.0 and runtime crio
	I1127 23:24:58.780387   11618 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 23:24:58.795078   11618 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1127 23:24:58.795221   11618 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory
	I1127 23:24:58.795236   11618 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory, skipping pull
	I1127 23:24:58.795243   11618 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in cache, skipping pull
	I1127 23:24:58.795250   11618 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 as a tarball
	I1127 23:24:58.813453   11618 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.0/preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I1127 23:24:58.813470   11618 cache.go:56] Caching tarball of preloaded images
	I1127 23:24:58.813600   11618 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.0 and runtime crio
	I1127 23:24:58.815442   11618 out.go:97] Downloading Kubernetes v1.29.0-rc.0 preload ...
	I1127 23:24:58.815461   11618 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I1127 23:24:58.853249   11618 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.0/preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:5686edee2f3c2c02d5f5e95cbdafe8b5 -> /home/jenkins/minikube-integration/17206-4554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I1127 23:25:02.410988   11618 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I1127 23:25:02.411070   11618 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17206-4554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I1127 23:25:03.223858   11618 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.0 on crio
	I1127 23:25:03.223978   11618 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/download-only-824886/config.json ...
	I1127 23:25:03.224171   11618 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.0 and runtime crio
	I1127 23:25:03.224334   11618 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17206-4554/.minikube/cache/linux/amd64/v1.29.0-rc.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-824886"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.0/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.21s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-824886
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (1.28s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-379589 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-379589" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-379589
--- PASS: TestDownloadOnlyKic (1.28s)

TestBinaryMirror (0.73s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-447370 --alsologtostderr --binary-mirror http://127.0.0.1:35791 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-447370" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-447370
--- PASS: TestBinaryMirror (0.73s)
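
TestBinaryMirror above points minikube at a local --binary-mirror URL instead of dl.k8s.io. Any static HTTP server that exposes the same release paths would satisfy it; a minimal sketch, where the mirror directory name is hypothetical and the port is the one from the log:

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve pre-downloaded binaries under the same layout as dl.k8s.io,
		// e.g. ./mirror/release/v1.28.4/bin/linux/amd64/kubectl
		fs := http.FileServer(http.Dir("./mirror"))
		log.Fatal(http.ListenAndServe("127.0.0.1:35791", fs))
	}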

                                                
                                    
TestOffline (56.12s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-180895 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-180895 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (53.208165832s)
helpers_test.go:175: Cleaning up "offline-crio-180895" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-180895
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-180895: (2.911016664s)
--- PASS: TestOffline (56.12s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-931360
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-931360: exit status 85 (60.363831ms)

-- stdout --
	* Profile "addons-931360" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-931360"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-931360
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-931360: exit status 85 (63.308116ms)

-- stdout --
	* Profile "addons-931360" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-931360"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (131.8s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-931360 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-931360 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m11.797331098s)
--- PASS: TestAddons/Setup (131.80s)

TestAddons/parallel/Registry (13.52s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 14.505302ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-qpl48" [0b324504-49a4-4094-95c0-5738fb210318] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.011046251s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8qd9d" [2537aa5b-0543-4d68-a3bc-91099fbe1789] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.011550158s
addons_test.go:339: (dbg) Run:  kubectl --context addons-931360 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-931360 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-931360 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.695090378s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-931360 ip
2023/11/27 23:27:33 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-931360 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.52s)
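
The registry check above runs busybox's "wget --spider -S" against the registry Service DNS name from inside a pod. A rough Go equivalent of that probe; it is illustrative only, since registry.kube-system.svc.cluster.local resolves solely inside the cluster:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// HEAD is the closest match to wget --spider: fetch headers, no body.
		c := &http.Client{Timeout: 5 * time.Second}
		resp, err := c.Head("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			fmt.Println("probe failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("registry responded:", resp.Status)
	}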

                                                
                                    
TestAddons/parallel/InspektorGadget (11.02s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jpk87" [1f443605-0248-4fbb-8f40-aa37ad75e3f5] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.011719459s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-931360
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-931360: (6.011119536s)
--- PASS: TestAddons/parallel/InspektorGadget (11.02s)

TestAddons/parallel/MetricsServer (5.73s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 3.449705ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-7rcvh" [9b447e1e-d353-4274-b7ab-31ae20a302f2] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.012658346s
addons_test.go:414: (dbg) Run:  kubectl --context addons-931360 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-931360 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.73s)

TestAddons/parallel/HelmTiller (9.65s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 3.986371ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-6vhzg" [f0db5041-51d0-4f7b-bd01-63edd775e33b] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.012041095s
addons_test.go:472: (dbg) Run:  kubectl --context addons-931360 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-931360 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.025100768s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-931360 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.65s)

TestAddons/parallel/CSI (74.39s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 15.880546ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-931360 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-931360 get pvc hpvc -o jsonpath={.status.phase} -n default
	... (the identical poll command above appears 42 times in total in the original log) ...
addons_test.go:573: (dbg) Run:  kubectl --context addons-931360 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [6b15d6b7-1a43-4970-bd2c-dea8ea3d4a11] Pending
helpers_test.go:344: "task-pv-pod" [6b15d6b7-1a43-4970-bd2c-dea8ea3d4a11] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [6b15d6b7-1a43-4970-bd2c-dea8ea3d4a11] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.009609819s
addons_test.go:583: (dbg) Run:  kubectl --context addons-931360 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-931360 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-931360 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-931360 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-931360 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-931360 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-931360 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-931360 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-931360 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-931360 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-931360 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-931360 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-931360 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b0f2443d-2698-4830-bcfe-8c19924649ee] Pending
helpers_test.go:344: "task-pv-pod-restore" [b0f2443d-2698-4830-bcfe-8c19924649ee] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b0f2443d-2698-4830-bcfe-8c19924649ee] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.008466824s
addons_test.go:625: (dbg) Run:  kubectl --context addons-931360 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-931360 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-931360 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-931360 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-931360 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.55364692s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-931360 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (74.39s)
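
The long run of helpers_test.go:394 lines above is a poll loop: the helper keeps re-reading the claim's .status.phase until it reaches the desired phase (assumed here to be Bound) or the timeout expires. A minimal sketch of the same loop shelling out to kubectl; the function name and poll interval are placeholders:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPVCPhase polls `kubectl get pvc` the same way the test helper does.
	func waitForPVCPhase(ctx, name, want string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
				"-o", "jsonpath={.status.phase}", "-n", "default").Output()
			if err == nil && strings.TrimSpace(string(out)) == want {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s did not reach phase %s within %v", name, want, timeout)
	}

	func main() {
		fmt.Println(waitForPVCPhase("addons-931360", "hpvc", "Bound", 6*time.Minute))
	}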

                                                
                                    
TestAddons/parallel/Headlamp (14.09s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-931360 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-931360 --alsologtostderr -v=1: (1.085466578s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-dgszv" [004b9098-b7a6-42d6-945e-84ed319f13b5] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-dgszv" [004b9098-b7a6-42d6-945e-84ed319f13b5] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.008002603s
--- PASS: TestAddons/parallel/Headlamp (14.09s)

TestAddons/parallel/CloudSpanner (5.51s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-5hgx6" [df4ab88b-d8a4-4dfc-bc88-0ce421b166a5] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.007786396s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-931360
--- PASS: TestAddons/parallel/CloudSpanner (5.51s)

TestAddons/parallel/LocalPath (9.76s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-931360 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-931360 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-931360 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-931360 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-931360 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-931360 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-931360 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-931360 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [6d07424f-ef21-4569-9619-0666f39987d0] Pending
helpers_test.go:344: "test-local-path" [6d07424f-ef21-4569-9619-0666f39987d0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [6d07424f-ef21-4569-9619-0666f39987d0] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [6d07424f-ef21-4569-9619-0666f39987d0] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.008624654s
addons_test.go:890: (dbg) Run:  kubectl --context addons-931360 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-931360 ssh "cat /opt/local-path-provisioner/pvc-063e1186-7680-47f4-926d-164851142721_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-931360 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-931360 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-931360 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.76s)

TestAddons/parallel/NvidiaDevicePlugin (5.48s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-497hr" [3faeada4-0ee7-4d44-81b3-200c71fd40b5] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.021648291s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-931360
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.48s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-931360 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-931360 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (12.13s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-931360
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-931360: (11.847032616s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-931360
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-931360
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-931360
--- PASS: TestAddons/StoppedEnableDisable (12.13s)

TestCertOptions (24.79s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-546641 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-546641 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (22.265148279s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-546641 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-546641 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-546641 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-546641" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-546641
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-546641: (1.933120321s)
--- PASS: TestCertOptions (24.79s)
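
The openssl step above asserts that the --apiserver-ips and --apiserver-names flags ended up as subject alternative names in the apiserver certificate. The same inspection can be done in Go; a sketch, with the certificate path taken from the log (it has to run on the minikube node, where that file exists):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("DNS names:", cert.DNSNames)  // expect localhost, www.google.com, ...
		fmt.Println("IP SANs:", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15, ...
	}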

                                                
                                    
TestCertExpiration (226.1s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-909894 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-909894 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (29.158854758s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-909894 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-909894 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (14.584991181s)
helpers_test.go:175: Cleaning up "cert-expiration-909894" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-909894
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-909894: (2.359675658s)
--- PASS: TestCertExpiration (226.10s)

TestForceSystemdFlag (25.38s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-406023 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-406023 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.743555208s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-406023 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-406023" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-406023
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-406023: (2.357623563s)
--- PASS: TestForceSystemdFlag (25.38s)
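
The cat of /etc/crio/crio.conf.d/02-crio.conf above checks which cgroup manager CRI-O was configured with. A sketch of that assertion, assuming the drop-in uses CRI-O's standard cgroup_manager key; it has to run inside the minikube node, not on the host:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
		if err != nil {
			panic(err)
		}
		// With --force-systemd, the drop-in should select the systemd cgroup manager.
		if strings.Contains(string(data), `cgroup_manager = "systemd"`) {
			fmt.Println("CRI-O is using the systemd cgroup manager")
		} else {
			fmt.Println("systemd cgroup manager not configured")
		}
	}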

                                                
                                    
TestForceSystemdEnv (31.99s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-794552 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-794552 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.600833205s)
helpers_test.go:175: Cleaning up "force-systemd-env-794552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-794552
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-794552: (6.391034618s)
--- PASS: TestForceSystemdEnv (31.99s)

TestKVMDriverInstallOrUpdate (3.58s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.58s)

TestErrorSpam/setup (21.93s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-796093 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-796093 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-796093 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-796093 --driver=docker  --container-runtime=crio: (21.926623634s)
--- PASS: TestErrorSpam/setup (21.93s)

TestErrorSpam/start (0.64s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-796093 --log_dir /tmp/nospam-796093 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-796093 --log_dir /tmp/nospam-796093 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-796093 --log_dir /tmp/nospam-796093 start --dry-run
--- PASS: TestErrorSpam/start (0.64s)

TestErrorSpam/status (0.86s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-796093 --log_dir /tmp/nospam-796093 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-796093 --log_dir /tmp/nospam-796093 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-796093 --log_dir /tmp/nospam-796093 status
--- PASS: TestErrorSpam/status (0.86s)

TestErrorSpam/pause (1.51s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-796093 --log_dir /tmp/nospam-796093 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-796093 --log_dir /tmp/nospam-796093 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-796093 --log_dir /tmp/nospam-796093 pause
--- PASS: TestErrorSpam/pause (1.51s)

TestErrorSpam/unpause (1.5s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-796093 --log_dir /tmp/nospam-796093 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-796093 --log_dir /tmp/nospam-796093 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-796093 --log_dir /tmp/nospam-796093 unpause
--- PASS: TestErrorSpam/unpause (1.50s)

TestErrorSpam/stop (1.41s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-796093 --log_dir /tmp/nospam-796093 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-796093 --log_dir /tmp/nospam-796093 stop: (1.211278134s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-796093 --log_dir /tmp/nospam-796093 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-796093 --log_dir /tmp/nospam-796093 stop
--- PASS: TestErrorSpam/stop (1.41s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17206-4554/.minikube/files/etc/test/nested/copy/11306/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (66.93s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-223758 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1127 23:32:20.766301   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt: no such file or directory
E1127 23:32:20.771923   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt: no such file or directory
E1127 23:32:20.782119   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt: no such file or directory
E1127 23:32:20.802379   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt: no such file or directory
E1127 23:32:20.842649   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt: no such file or directory
E1127 23:32:20.922919   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt: no such file or directory
E1127 23:32:21.083204   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt: no such file or directory
E1127 23:32:21.403605   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt: no such file or directory
E1127 23:32:22.043924   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt: no such file or directory
E1127 23:32:23.324695   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-223758 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m6.932354715s)
--- PASS: TestFunctional/serial/StartWithProxy (66.93s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.6s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-223758 --alsologtostderr -v=8
E1127 23:32:25.885331   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt: no such file or directory
E1127 23:32:31.005850   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt: no such file or directory
E1127 23:32:41.246024   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt: no such file or directory
E1127 23:33:01.727270   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-223758 --alsologtostderr -v=8: (39.596939366s)
functional_test.go:659: soft start took 39.597670165s for "functional-223758" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.60s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-223758 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.85s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.85s)

TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-223758 /tmp/TestFunctionalserialCacheCmdcacheadd_local1309266255/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 cache add minikube-local-cache-test:functional-223758
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 cache delete minikube-local-cache-test:functional-223758
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-223758
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-223758 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (280.216883ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)
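
Read as a how-to, the three steps above double as the recovery path for a deleted cached image. A minimal sketch, assuming a running profile named functional-223758 as in this run and `minikube` on PATH rather than the out/ build the harness uses:

	# delete the image on the node, confirm it is gone, then restore it from the host cache
	minikube -p functional-223758 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-223758 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image absent
	minikube -p functional-223758 cache reload
	minikube -p functional-223758 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 0: image restored
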
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 kubectl -- --context functional-223758 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-223758 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (32.04s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-223758 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1127 23:33:42.687780   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-223758 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.040938305s)
functional_test.go:757: restart took 32.041056192s for "functional-223758" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.04s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-223758 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.33s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-223758 logs: (1.328987941s)
--- PASS: TestFunctional/serial/LogsCmd (1.33s)

TestFunctional/serial/LogsFileCmd (1.36s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 logs --file /tmp/TestFunctionalserialLogsFileCmd3440426786/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-223758 logs --file /tmp/TestFunctionalserialLogsFileCmd3440426786/001/logs.txt: (1.358645865s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.36s)

TestFunctional/serial/InvalidService (4.27s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-223758 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-223758
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-223758: exit status 115 (338.540036ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30835 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-223758 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.27s)
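
The contents of testdata/invalidsvc.yaml are not reproduced in this log. A hypothetical stand-in that triggers the same SVC_UNREACHABLE failure (exit status 115) is a NodePort Service whose selector matches no running pod:

	# hypothetical equivalent of testdata/invalidsvc.yaml: a Service with no backing pods
	kubectl --context functional-223758 apply -f - <<-'EOF'
	apiVersion: v1
	kind: Service
	metadata:
	  name: invalid-svc
	spec:
	  type: NodePort
	  selector:
	    app: no-such-pod    # no pod carries this label, so the service has no endpoints
	  ports:
	  - port: 80
	EOF
	minikube service invalid-svc -p functional-223758   # exit 115: no running pod for service found
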
TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-223758 config get cpus: exit status 14 (68.790896ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-223758 config get cpus: exit status 14 (75.290368ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
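
As exercised above, `minikube config` distinguishes an unset key from a set one: `config get` on an absent key prints the error and exits 14, which is exactly what the assertions rely on. A minimal sketch with the same profile name as this run:

	minikube -p functional-223758 config unset cpus
	minikube -p functional-223758 config get cpus    # exit 14: specified key could not be found in config
	minikube -p functional-223758 config set cpus 2
	minikube -p functional-223758 config get cpus    # prints 2, exit 0
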
TestFunctional/parallel/DashboardCmd (11.16s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-223758 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-223758 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 50252: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.16s)

TestFunctional/parallel/DryRun (0.39s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-223758 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-223758 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (155.370765ms)

-- stdout --
	* [functional-223758] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-4554/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4554/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I1127 23:34:24.969393   48656 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:34:24.969567   48656 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:34:24.969580   48656 out.go:309] Setting ErrFile to fd 2...
	I1127 23:34:24.969587   48656 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:34:24.969783   48656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4554/.minikube/bin
	I1127 23:34:24.970361   48656 out.go:303] Setting JSON to false
	I1127 23:34:24.971577   48656 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1017,"bootTime":1701127048,"procs":643,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 23:34:24.971644   48656 start.go:138] virtualization: kvm guest
	I1127 23:34:24.973696   48656 out.go:177] * [functional-223758] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 23:34:24.975376   48656 out.go:177]   - MINIKUBE_LOCATION=17206
	I1127 23:34:24.975308   48656 notify.go:220] Checking for updates...
	I1127 23:34:24.976929   48656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:34:24.978515   48656 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-4554/kubeconfig
	I1127 23:34:24.979884   48656 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4554/.minikube
	I1127 23:34:24.981336   48656 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 23:34:24.982810   48656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 23:34:24.984757   48656 config.go:182] Loaded profile config "functional-223758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:34:24.985225   48656 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:34:25.006689   48656 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 23:34:25.006775   48656 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:34:25.059132   48656 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-11-27 23:34:25.049646618 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 23:34:25.059237   48656 docker.go:295] overlay module found
	I1127 23:34:25.061121   48656 out.go:177] * Using the docker driver based on existing profile
	I1127 23:34:25.062600   48656 start.go:298] selected driver: docker
	I1127 23:34:25.062613   48656 start.go:902] validating driver "docker" against &{Name:functional-223758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-223758 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:34:25.062705   48656 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 23:34:25.064855   48656 out.go:177] 
	W1127 23:34:25.066294   48656 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1127 23:34:25.067706   48656 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-223758 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.39s)
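
The dry run exercises the same validation path as a real start: with --memory 250MB it fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23, below the 1800MB usable minimum), while without the flag it validates cleanly against the existing profile. A sketch of the two invocations, assuming `minikube` on PATH:

	minikube start -p functional-223758 --dry-run --memory 250MB   # exit 23: 250MiB < 1800MB minimum
	minikube start -p functional-223758 --dry-run                  # validates only; provisions nothing
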
TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-223758 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-223758 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (177.544418ms)

-- stdout --
	* [functional-223758] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-4554/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4554/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I1127 23:34:24.800927   48549 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:34:24.801198   48549 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:34:24.801208   48549 out.go:309] Setting ErrFile to fd 2...
	I1127 23:34:24.801213   48549 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:34:24.801574   48549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4554/.minikube/bin
	I1127 23:34:24.802307   48549 out.go:303] Setting JSON to false
	I1127 23:34:24.803637   48549 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1017,"bootTime":1701127048,"procs":646,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 23:34:24.803701   48549 start.go:138] virtualization: kvm guest
	I1127 23:34:24.806192   48549 out.go:177] * [functional-223758] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1127 23:34:24.807983   48549 out.go:177]   - MINIKUBE_LOCATION=17206
	I1127 23:34:24.808023   48549 notify.go:220] Checking for updates...
	I1127 23:34:24.809578   48549 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:34:24.811265   48549 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-4554/kubeconfig
	I1127 23:34:24.812809   48549 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4554/.minikube
	I1127 23:34:24.814238   48549 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 23:34:24.815563   48549 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 23:34:24.817308   48549 config.go:182] Loaded profile config "functional-223758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:34:24.817807   48549 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:34:24.840176   48549 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 23:34:24.840267   48549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:34:24.901998   48549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-11-27 23:34:24.893631765 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 23:34:24.902136   48549 docker.go:295] overlay module found
	I1127 23:34:24.904793   48549 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1127 23:34:24.906242   48549 start.go:298] selected driver: docker
	I1127 23:34:24.906258   48549 start.go:902] validating driver "docker" against &{Name:functional-223758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-223758 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:34:24.906332   48549 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 23:34:24.908558   48549 out.go:177] 
	W1127 23:34:24.909853   48549 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1127 23:34:24.911366   48549 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (0.98s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.98s)
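
The -f flag seen above takes a Go template over the status fields (Host, Kubelet, APIServer, Kubeconfig), and -o json returns the same data machine-readably. A small sketch, assuming `minikube` on PATH:

	minikube -p functional-223758 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
	minikube -p functional-223758 status -o json
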
TestFunctional/parallel/ServiceCmdConnect (10.7s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-223758 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-223758 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-xgdrj" [01a09203-7fd9-4712-ba1a-1788b0ce2d20] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-xgdrj" [01a09203-7fd9-4712-ba1a-1788b0ce2d20] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.010968064s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31888
functional_test.go:1674: http://192.168.49.2:31888: success! body:

Hostname: hello-node-connect-55497b8b78-xgdrj

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31888
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.70s)
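
The round trip above condenses to four commands; a sketch using the profile and image from this run:

	kubectl --context functional-223758 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-223758 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(minikube -p functional-223758 service hello-node-connect --url)
	curl -s "$URL"   # echoserver replies with hostname, server values, and request headers as shown
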
TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (32.91s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [50448772-580b-46da-a188-834f297ac1e6] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.011402878s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-223758 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-223758 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-223758 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-223758 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ef96dc82-23d6-4659-916d-834a06f3cb99] Pending
helpers_test.go:344: "sp-pod" [ef96dc82-23d6-4659-916d-834a06f3cb99] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ef96dc82-23d6-4659-916d-834a06f3cb99] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.014365924s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-223758 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-223758 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-223758 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8ceb246d-9b2f-4834-a214-3270a80204ee] Pending
helpers_test.go:344: "sp-pod" [8ceb246d-9b2f-4834-a214-3270a80204ee] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8ceb246d-9b2f-4834-a214-3270a80204ee] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.011056363s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-223758 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (32.91s)
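
The persistence check is the interesting part of this test: a file written under the claim's mount outlives the pod that wrote it. A condensed sketch (the testdata paths are relative to the minikube test tree):

	kubectl --context functional-223758 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-223758 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-223758 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-223758 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-223758 apply -f testdata/storage-provisioner/pod.yaml   # new pod, same claim
	kubectl --context functional-223758 exec sp-pod -- ls /tmp/mount                     # foo survives
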
TestFunctional/parallel/SSHCmd (0.58s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.58s)

TestFunctional/parallel/CpCmd (1.23s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh -n functional-223758 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 cp functional-223758:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1090270315/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh -n functional-223758 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.23s)

TestFunctional/parallel/MySQL (22.18s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-223758 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-ndrzm" [959c3a1f-6754-472c-8554-013389926f88] Pending
helpers_test.go:344: "mysql-859648c796-ndrzm" [959c3a1f-6754-472c-8554-013389926f88] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-ndrzm" [959c3a1f-6754-472c-8554-013389926f88] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.027449176s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-223758 exec mysql-859648c796-ndrzm -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-223758 exec mysql-859648c796-ndrzm -- mysql -ppassword -e "show databases;": exit status 1 (186.396107ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-223758 exec mysql-859648c796-ndrzm -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-223758 exec mysql-859648c796-ndrzm -- mysql -ppassword -e "show databases;": exit status 1 (146.344719ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-223758 exec mysql-859648c796-ndrzm -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.18s)
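
Note on the two non-zero exits above: the pod reports Running before mysqld has finished bringing up its socket, so the first "show databases;" attempts fail with ERROR 2002 and the harness simply retries until one succeeds. A minimal readiness loop in the same spirit (context and pod name taken from this run; using mysqladmin is an assumption about the mysql:5.7 image, not something the harness does):

    $ until kubectl --context functional-223758 exec mysql-859648c796-ndrzm -- \
          mysqladmin ping -ppassword --silent; do sleep 2; done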

TestFunctional/parallel/FileSync (0.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/11306/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh "sudo cat /etc/test/nested/copy/11306/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)
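
Note: minikube copies anything placed under $MINIKUBE_HOME/files into the node at the matching absolute path; the path component 11306 here appears to be the test process's PID. A sketch of the layout that would produce this file, assuming the default ~/.minikube home:

    $ mkdir -p ~/.minikube/files/etc/test/nested/copy/11306
    $ echo "Test file for checking file sync process" \
          > ~/.minikube/files/etc/test/nested/copy/11306/hosts
    # synced into the node on the next start, then readable via:
    $ out/minikube-linux-amd64 -p functional-223758 ssh "cat /etc/test/nested/copy/11306/hosts"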

TestFunctional/parallel/CertSync (1.95s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/11306.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh "sudo cat /etc/ssl/certs/11306.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/11306.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh "sudo cat /usr/share/ca-certificates/11306.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/113062.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh "sudo cat /etc/ssl/certs/113062.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/113062.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh "sudo cat /usr/share/ca-certificates/113062.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.95s)
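
Note: the .0 entries are OpenSSL subject-hash names for the two synced certs, which lets TLS stacks that scan /etc/ssl/certs by hash pick them up. Assuming that is how they were generated, the hash should reproduce the filename checked above:

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/11306.pem
    51391683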

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-223758 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
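
Note: the go-template above flattens the label keys of the first node; an equivalent manual spot check is:

    $ kubectl --context functional-223758 get nodes --show-labels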

TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-223758 ssh "sudo systemctl is-active docker": exit status 1 (295.281208ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-223758 ssh "sudo systemctl is-active containerd": exit status 1 (340.188341ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)
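
Note: the non-zero exits here are the passing outcome. systemctl is-active prints the unit state and exits non-zero (3) for an inactive unit, ssh propagates that, and the test asserts docker and containerd are both disabled when crio is the active runtime. The same check by hand:

    $ systemctl is-active docker; echo $?
    inactive
    3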

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (0.72s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.72s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-223758 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-223758
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-223758 image ls --format short --alsologtostderr:
I1127 23:34:27.101337   50210 out.go:296] Setting OutFile to fd 1 ...
I1127 23:34:27.101480   50210 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:34:27.101492   50210 out.go:309] Setting ErrFile to fd 2...
I1127 23:34:27.101499   50210 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:34:27.101787   50210 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4554/.minikube/bin
I1127 23:34:27.102628   50210 config.go:182] Loaded profile config "functional-223758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:34:27.102772   50210 config.go:182] Loaded profile config "functional-223758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:34:27.103336   50210 cli_runner.go:164] Run: docker container inspect functional-223758 --format={{.State.Status}}
I1127 23:34:27.119171   50210 ssh_runner.go:195] Run: systemctl --version
I1127 23:34:27.119217   50210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-223758
I1127 23:34:27.133850   50210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/functional-223758/id_rsa Username:docker}
I1127 23:34:27.234779   50210 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-223758 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| docker.io/library/mysql                 | 5.7                | bdba757bc9336 | 520MB  |
| docker.io/library/nginx                 | alpine             | b135667c98980 | 49.5MB |
| docker.io/library/nginx                 | latest             | a6bd71f48f683 | 191MB  |
| gcr.io/google-containers/addon-resizer  | functional-223758  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-223758 image ls --format table --alsologtostderr:
I1127 23:34:27.839130   50683 out.go:296] Setting OutFile to fd 1 ...
I1127 23:34:27.839407   50683 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:34:27.839423   50683 out.go:309] Setting ErrFile to fd 2...
I1127 23:34:27.839431   50683 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:34:27.839759   50683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4554/.minikube/bin
I1127 23:34:27.840757   50683 config.go:182] Loaded profile config "functional-223758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:34:27.840924   50683 config.go:182] Loaded profile config "functional-223758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:34:27.841487   50683 cli_runner.go:164] Run: docker container inspect functional-223758 --format={{.State.Status}}
I1127 23:34:27.861792   50683 ssh_runner.go:195] Run: systemctl --version
I1127 23:34:27.861856   50683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-223758
I1127 23:34:27.880330   50683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/functional-223758/id_rsa Username:docker}
I1127 23:34:27.974347   50683 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-223758 image ls --format json --alsologtostderr:
[{"id":"a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866","repoDigests":["docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee","docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7"],"repoTags":["docker.io/library/nginx:latest"],"size":"190960382"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c","repoDigests":["docker.io/library/mysql@sha256:358b0482ced8103a8691c781e1cb6cd6b5a0b463a6dc0924a7ef357513ecc7a3","docker.io/library/mysql@sha256:f566819f2eee3a60cf5ea6c8b7d1bfc9de62e34268bf62dc34870c4fca8a85d1"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519653829"},{"id":"b135667c98980d3ca424a228cc4d2afdb287dc4e1a6a813a34b2e1705517488e","repoDigests":["docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d","docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77"],"repoTags":["docker.io/library/nginx:alpine"],"size":"49538855"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-223758"],"size":"34114467"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-223758 image ls --format json --alsologtostderr:
I1127 23:34:27.603404   50546 out.go:296] Setting OutFile to fd 1 ...
I1127 23:34:27.603711   50546 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:34:27.603722   50546 out.go:309] Setting ErrFile to fd 2...
I1127 23:34:27.603726   50546 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:34:27.603976   50546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4554/.minikube/bin
I1127 23:34:27.604636   50546 config.go:182] Loaded profile config "functional-223758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:34:27.604752   50546 config.go:182] Loaded profile config "functional-223758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:34:27.605229   50546 cli_runner.go:164] Run: docker container inspect functional-223758 --format={{.State.Status}}
I1127 23:34:27.623787   50546 ssh_runner.go:195] Run: systemctl --version
I1127 23:34:27.623839   50546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-223758
I1127 23:34:27.642866   50546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/functional-223758/id_rsa Username:docker}
I1127 23:34:27.730238   50546 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
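
Note: the JSON format is the easiest of the four to post-process; for example, assuming jq is installed on the host, the same tag list as the short format can be recovered with:

    $ out/minikube-linux-amd64 -p functional-223758 image ls --format json | jq -r '.[].repoTags[]'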

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-223758 image ls --format yaml --alsologtostderr:
- id: bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c
repoDigests:
- docker.io/library/mysql@sha256:358b0482ced8103a8691c781e1cb6cd6b5a0b463a6dc0924a7ef357513ecc7a3
- docker.io/library/mysql@sha256:f566819f2eee3a60cf5ea6c8b7d1bfc9de62e34268bf62dc34870c4fca8a85d1
repoTags:
- docker.io/library/mysql:5.7
size: "519653829"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-223758
size: "34114467"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: b135667c98980d3ca424a228cc4d2afdb287dc4e1a6a813a34b2e1705517488e
repoDigests:
- docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d
- docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77
repoTags:
- docker.io/library/nginx:alpine
size: "49538855"
- id: a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests:
- docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee
- docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7
repoTags:
- docker.io/library/nginx:latest
size: "190960382"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-223758 image ls --format yaml --alsologtostderr:
I1127 23:34:27.363420   50343 out.go:296] Setting OutFile to fd 1 ...
I1127 23:34:27.363766   50343 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:34:27.363789   50343 out.go:309] Setting ErrFile to fd 2...
I1127 23:34:27.363802   50343 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:34:27.364060   50343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4554/.minikube/bin
I1127 23:34:27.364705   50343 config.go:182] Loaded profile config "functional-223758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:34:27.364824   50343 config.go:182] Loaded profile config "functional-223758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:34:27.365267   50343 cli_runner.go:164] Run: docker container inspect functional-223758 --format={{.State.Status}}
I1127 23:34:27.387109   50343 ssh_runner.go:195] Run: systemctl --version
I1127 23:34:27.387187   50343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-223758
I1127 23:34:27.404512   50343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/functional-223758/id_rsa Username:docker}
I1127 23:34:27.498827   50343 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-223758 ssh pgrep buildkitd: exit status 1 (289.350221ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 image build -t localhost/my-image:functional-223758 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-223758 image build -t localhost/my-image:functional-223758 testdata/build --alsologtostderr: (1.313167356s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-223758 image build -t localhost/my-image:functional-223758 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0c3cb2e8249
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-223758
--> a86041132fa
Successfully tagged localhost/my-image:functional-223758
a86041132fa0f806779a9b640fff5f14f1905c64b7341009d1d1767c818e5350
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-223758 image build -t localhost/my-image:functional-223758 testdata/build --alsologtostderr:
I1127 23:34:27.677674   50612 out.go:296] Setting OutFile to fd 1 ...
I1127 23:34:27.677972   50612 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:34:27.677981   50612 out.go:309] Setting ErrFile to fd 2...
I1127 23:34:27.677985   50612 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:34:27.678205   50612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4554/.minikube/bin
I1127 23:34:27.678791   50612 config.go:182] Loaded profile config "functional-223758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:34:27.679291   50612 config.go:182] Loaded profile config "functional-223758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:34:27.679665   50612 cli_runner.go:164] Run: docker container inspect functional-223758 --format={{.State.Status}}
I1127 23:34:27.696945   50612 ssh_runner.go:195] Run: systemctl --version
I1127 23:34:27.696990   50612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-223758
I1127 23:34:27.712706   50612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/functional-223758/id_rsa Username:docker}
I1127 23:34:27.802503   50612 build_images.go:151] Building image from path: /tmp/build.3975437178.tar
I1127 23:34:27.802560   50612 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1127 23:34:27.812510   50612 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3975437178.tar
I1127 23:34:27.816034   50612 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3975437178.tar: stat -c "%s %y" /var/lib/minikube/build/build.3975437178.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3975437178.tar': No such file or directory
I1127 23:34:27.816062   50612 ssh_runner.go:362] scp /tmp/build.3975437178.tar --> /var/lib/minikube/build/build.3975437178.tar (3072 bytes)
I1127 23:34:27.843419   50612 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3975437178
I1127 23:34:27.851618   50612 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3975437178 -xf /var/lib/minikube/build/build.3975437178.tar
I1127 23:34:27.861664   50612 crio.go:297] Building image: /var/lib/minikube/build/build.3975437178
I1127 23:34:27.861725   50612 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-223758 /var/lib/minikube/build/build.3975437178 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1127 23:34:28.911480   50612 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-223758 /var/lib/minikube/build/build.3975437178 --cgroup-manager=cgroupfs: (1.049728366s)
I1127 23:34:28.911551   50612 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3975437178
I1127 23:34:28.919571   50612 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3975437178.tar
I1127 23:34:28.927072   50612 build_images.go:207] Built localhost/my-image:functional-223758 from /tmp/build.3975437178.tar
I1127 23:34:28.927103   50612 build_images.go:123] succeeded building to: functional-223758
I1127 23:34:28.927108   50612 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 image ls
2023/11/27 23:34:36 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.82s)
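
Note: judging from the STEP 1/3 through 3/3 lines above, testdata/build presumably contains a content.txt plus a Dockerfile along these lines (reconstructed from the log, not copied from the repo):

    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /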

TestFunctional/parallel/ImageCommands/Setup (1.11s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.078902727s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-223758
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.11s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-223758 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-223758 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-223758 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 44030: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-223758 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-223758 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (20.33s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-223758 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [8a126d00-c3a8-4f25-a1c8-dcaebf7d2fc3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [8a126d00-c3a8-4f25-a1c8-dcaebf7d2fc3] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 20.013089953s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (20.33s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 image load --daemon gcr.io/google-containers/addon-resizer:functional-223758 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-223758 image load --daemon gcr.io/google-containers/addon-resizer:functional-223758 --alsologtostderr: (4.353382338s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.58s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-223758
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 image load --daemon gcr.io/google-containers/addon-resizer:functional-223758 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-223758 image load --daemon gcr.io/google-containers/addon-resizer:functional-223758 --alsologtostderr: (5.540179046s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.51s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 image save gcr.io/google-containers/addon-resizer:functional-223758 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.86s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-223758 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.254.153 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
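
Note: with minikube tunnel running, the LoadBalancer ingress IP above is routable from the host, so the direct-access check amounts to something like the following (a sketch of the idea, not the harness's exact probe):

    $ curl -s -o /dev/null -w "%{http_code}\n" http://10.103.254.153
    200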

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-223758 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 image rm gcr.io/google-containers/addon-resizer:functional-223758 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-223758 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-223758 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-nnx2z" [923d8353-e00d-40e2-8fef-a7e6c5098335] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-nnx2z" [923d8353-e00d-40e2-8fef-a7e6c5098335] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.009069628s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.14s)
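
Note: the deployment is exposed as a NodePort service, which the later ServiceCmd subtests resolve into URLs; outside the harness the assigned port can be read directly, e.g.:

    $ kubectl --context functional-223758 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'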

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-223758 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.258892301s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.91s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-223758
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 image save --daemon gcr.io/google-containers/addon-resizer:functional-223758 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-223758
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.92s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "283.510625ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "61.074342ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "286.678733ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "59.030277ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

TestFunctional/parallel/MountCmd/any-port (5.95s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-223758 /tmp/TestFunctionalparallelMountCmdany-port3139952894/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1701128058608053952" to /tmp/TestFunctionalparallelMountCmdany-port3139952894/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1701128058608053952" to /tmp/TestFunctionalparallelMountCmdany-port3139952894/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1701128058608053952" to /tmp/TestFunctionalparallelMountCmdany-port3139952894/001/test-1701128058608053952
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-223758 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (259.955915ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 27 23:34 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 27 23:34 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 27 23:34 test-1701128058608053952
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh cat /mount-9p/test-1701128058608053952
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-223758 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c0ee68ff-d044-4834-ae00-584b574d8e2f] Pending
helpers_test.go:344: "busybox-mount" [c0ee68ff-d044-4834-ae00-584b574d8e2f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c0ee68ff-d044-4834-ae00-584b574d8e2f] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c0ee68ff-d044-4834-ae00-584b574d8e2f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.012816089s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-223758 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-223758 /tmp/TestFunctionalparallelMountCmdany-port3139952894/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.95s)
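Taken together, the steps above are: start a background 9p mount daemon, confirm the guest sees it, then prove a pod can read the host's files and write back through the mount. A condensed sketch of the same flow (host path and profile are illustrative; busybox-mount-test.yaml is the testdata manifest the log references):

    minikube mount /tmp/hostdir:/mount-9p &                        # background 9p server
    minikube ssh "findmnt -T /mount-9p | grep 9p"                  # guest sees a 9p filesystem
    minikube ssh -- ls -la /mount-9p                               # host files visible in the guest
    kubectl replace --force -f testdata/busybox-mount-test.yaml    # pod writes created-by-pod
    minikube ssh stat /mount-9p/created-by-pod                     # pod's write reached the mount
    minikube ssh "sudo umount -f /mount-9p"                        # tear down before killing the daemon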

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.91s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 service list -o json
functional_test.go:1493: Took "926.377064ms" to run "out/minikube-linux-amd64 -p functional-223758 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.93s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-223758 /tmp/TestFunctionalparallelMountCmdspecific-port4250931533/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-223758 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (308.941662ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-223758 /tmp/TestFunctionalparallelMountCmdspecific-port4250931533/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-223758 ssh "sudo umount -f /mount-9p": exit status 1 (375.329458ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-223758 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-223758 /tmp/TestFunctionalparallelMountCmdspecific-port4250931533/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.82s)
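Two quirks of this subtest are visible in the log: the first findmnt probe races the mount daemon's startup and is allowed one failing attempt, and the final umount -f exits 32 ("not mounted") because the daemon had already been stopped, which the test tolerates. Pinning the 9p port, sketched with the port value from the log:

    minikube mount /tmp/hostdir:/mount-9p --port 46464 &
    minikube ssh "findmnt -T /mount-9p | grep 9p"   # may need a retry while the daemon starts
    minikube ssh "sudo umount -f /mount-9p"         # exits 32 once the mount is already gone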

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31364
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.60s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.62s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-223758 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2236456127/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-223758 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2236456127/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-223758 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2236456127/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-223758 ssh "findmnt -T" /mount1: exit status 1 (476.231389ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-223758 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-223758 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2236456127/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-223758 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2236456127/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-223758 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2236456127/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.69s)
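Here three mount daemons target /mount1 through /mount3 and a single --kill=true tears all of them down, which is why the per-mount stop helpers then find the parent processes already dead. A sketch of the pattern:

    minikube mount /tmp/shared:/mount1 &
    minikube mount /tmp/shared:/mount2 &
    minikube mount /tmp/shared:/mount3 &
    minikube ssh "findmnt -T" /mount1    # verify each mount point in turn
    minikube mount --kill=true           # kill every mount daemon for the profile at once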

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-223758 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31364
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.85s)
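The ServiceCmd subtests are all variations on resolving a NodePort service to an endpoint; note that the HTTPS and plain URL forms land on the same node address, 192.168.49.2:31364, in this run. Reproducing them by hand (hello-node is the fixture service deployed earlier in the suite):

    minikube service list                                     # human-readable table
    minikube service list -o json                             # machine-readable listing
    minikube service --namespace=default --https --url hello-node
    minikube service hello-node --url                         # plain HTTP endpoint
    minikube service hello-node --url --format={{.IP}}        # just the node IP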

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-223758
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-223758
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-223758
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (68.83s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-719415 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1127 23:35:04.608321   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-719415 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m8.831804172s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (68.83s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.8s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-719415 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-719415 addons enable ingress --alsologtostderr -v=5: (10.795490049s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.80s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-719415 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

                                                
                                    
TestJSONOutput/start/Command (66.81s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-187504 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1127 23:39:11.526291   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/functional-223758/client.crt: no such file or directory
E1127 23:39:32.007264   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/functional-223758/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-187504 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m6.806494227s)
--- PASS: TestJSONOutput/start/Command (66.81s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-187504 --output=json --user=testUser
E1127 23:40:12.968001   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/functional-223758/client.crt: no such file or directory
--- PASS: TestJSONOutput/pause/Command (0.64s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.59s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-187504 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.74s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-187504 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-187504 --output=json --user=testUser: (5.736044598s)
--- PASS: TestJSONOutput/stop/Command (5.74s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-954783 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-954783 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (81.065694ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e609666e-5c43-4ece-9ced-38b081ee6761","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-954783] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b6097cf2-6d12-49fc-be7d-04a77539c393","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17206"}}
	{"specversion":"1.0","id":"c5ed5def-d703-41d8-a118-fb83f41e0775","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cf2944ff-58fe-4fa3-ab3f-a8d2fb26dff7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17206-4554/kubeconfig"}}
	{"specversion":"1.0","id":"35fec708-0a04-4987-87ba-0b3af35a4899","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4554/.minikube"}}
	{"specversion":"1.0","id":"29ea632f-d796-47bd-8f73-2aa043af75f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"39f5803a-4d6f-44cd-9e4e-c83150ea012b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9490d22b-3585-4ee0-9a93-ca6e989e5029","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-954783" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-954783
--- PASS: TestErrorJSONOutput (0.23s)
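The stdout captured above is a stream of CloudEvents-style JSON objects, one per line, ending in an io.k8s.sigs.minikube.error event with exitcode 56 for the unsupported "fail" driver. A stream like that is easy to filter with jq; a sketch, assuming jq is installed and using a throwaway profile name:

    minikube start -p json-demo --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # -> The driver 'fail' is not supported on linux/amd64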

                                                
                                    
TestKicCustomNetwork/create_custom_network (33.01s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-077176 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-077176 --network=: (30.980498806s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-077176" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-077176
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-077176: (2.0098518s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.01s)
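With the Docker (KIC) driver each cluster gets its own Docker network; this subtest starts with an explicit empty --network= and then checks docker network ls for a network matching the profile, and that deleting the profile removes it again. A sketch with an illustrative profile name:

    minikube start -p net-demo --network=     # cluster gets a dedicated Docker network
    docker network ls --format {{.Name}}      # the new network appears in the listing
    minikube delete -p net-demo               # cleanup also removes the network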

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (24.17s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-260421 --network=bridge
E1127 23:41:00.815544   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt: no such file or directory
E1127 23:41:00.820837   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt: no such file or directory
E1127 23:41:00.831164   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt: no such file or directory
E1127 23:41:00.851469   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt: no such file or directory
E1127 23:41:00.891793   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt: no such file or directory
E1127 23:41:00.972141   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt: no such file or directory
E1127 23:41:01.132678   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt: no such file or directory
E1127 23:41:01.453291   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt: no such file or directory
E1127 23:41:02.094270   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt: no such file or directory
E1127 23:41:03.374925   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt: no such file or directory
E1127 23:41:05.935169   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt: no such file or directory
E1127 23:41:11.056277   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-260421 --network=bridge: (22.228002278s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-260421" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-260421
E1127 23:41:21.297187   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-260421: (1.920374543s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.17s)

                                                
                                    
TestKicExistingNetwork (27.59s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-822079 --network=existing-network
E1127 23:41:34.888642   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/functional-223758/client.crt: no such file or directory
E1127 23:41:41.777428   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-822079 --network=existing-network: (25.584844488s)
helpers_test.go:175: Cleaning up "existing-network-822079" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-822079
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-822079: (1.875217369s)
--- PASS: TestKicExistingNetwork (27.59s)

                                                
                                    
TestKicCustomSubnet (26.74s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-605978 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-605978 --subnet=192.168.60.0/24: (24.630577331s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-605978 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-605978" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-605978
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-605978: (2.092507577s)
--- PASS: TestKicCustomSubnet (26.74s)
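--subnet constrains the address range of the Docker network minikube creates for the cluster, and the test reads it back with the inspect template shown above. Sketch, reusing the subnet from the log:

    minikube start -p subnet-demo --subnet=192.168.60.0/24
    docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"
    # -> 192.168.60.0/24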

                                                
                                    
TestKicStaticIP (27.51s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-660369 --static-ip=192.168.200.200
E1127 23:42:20.768760   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt: no such file or directory
E1127 23:42:22.737761   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-660369 --static-ip=192.168.200.200: (25.697890546s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-660369 ip
helpers_test.go:175: Cleaning up "static-ip-660369" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-660369
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-660369: (1.674940662s)
--- PASS: TestKicStaticIP (27.51s)
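--static-ip pins the node container's address instead of letting Docker assign one from the subnet; the value chosen here sits outside minikube's usual 192.168.49.0/24 range, and minikube ip confirms it took effect. Sketch:

    minikube start -p ip-demo --static-ip=192.168.200.200
    minikube -p ip-demo ip    # expected: 192.168.200.200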

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (53.58s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-667087 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-667087 --driver=docker  --container-runtime=crio: (24.347429426s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-672137 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-672137 --driver=docker  --container-runtime=crio: (24.088178681s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-667087
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-672137
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-672137" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-672137
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-672137: (1.863150004s)
helpers_test.go:175: Cleaning up "first-667087" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-667087
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-667087: (2.241810672s)
--- PASS: TestMinikubeProfile (53.58s)
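minikube profile <name> switches the active profile, and after each switch the test re-reads profile list -ojson to confirm the expected cluster is now the selected one. By hand, with illustrative names:

    minikube start -p first --driver=docker --container-runtime=crio
    minikube start -p second --driver=docker --container-runtime=crio
    minikube profile first          # make "first" the active profile
    minikube profile list -ojson    # inspect which profile is now active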

                                                
                                    
TestMountStart/serial/StartWithMountFirst (5.32s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-259749 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-259749 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.316759867s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.32s)
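This variant establishes the 9p mount at start time instead of via a separate mount daemon, and --no-kubernetes keeps the node minimal since only the mount is under test; the guest-side mount point defaults to /minikube-host, which the VerifyMount steps below list. Sketch with an illustrative profile:

    minikube start -p mount-demo --memory=2048 --mount --mount-port 46464 \
        --no-kubernetes --driver=docker --container-runtime=crio
    minikube -p mount-demo ssh -- ls /minikube-host    # default guest mount point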

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-259749 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.14s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-272109 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1127 23:43:44.658755   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-272109 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.137900299s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.14s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-272109 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-259749 --alsologtostderr -v=5
E1127 23:43:51.044000   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/functional-223758/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-259749 --alsologtostderr -v=5: (1.607579918s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-272109 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-272109
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-272109: (1.211552531s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.03s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-272109
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-272109: (6.029175639s)
--- PASS: TestMountStart/serial/RestartStopped (7.03s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-272109 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (74.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-595051 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1127 23:44:18.729631   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/functional-223758/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-595051 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m13.820959834s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (74.27s)
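--nodes=2 brings up a control-plane node plus one worker in a single start, and minikube status then reports each node as a separate block, which the later multinode subtests depend on. Sketch with an illustrative profile:

    minikube start -p mn-demo --nodes=2 --memory=2200 --driver=docker --container-runtime=crio
    minikube -p mn-demo status    # one block per node: mn-demo, mn-demo-m02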

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-595051 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-595051 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-595051 -- rollout status deployment/busybox: (1.901257046s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-595051 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-595051 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-595051 -- exec busybox-5bc68d56bd-8pbpd -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-595051 -- exec busybox-5bc68d56bd-zp72z -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-595051 -- exec busybox-5bc68d56bd-8pbpd -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-595051 -- exec busybox-5bc68d56bd-zp72z -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-595051 -- exec busybox-5bc68d56bd-8pbpd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-595051 -- exec busybox-5bc68d56bd-zp72z -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.67s)
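The busybox deployment spreads its two replicas across the nodes, and the nslookup round then checks cluster DNS from both pods at increasing qualification: an external name, the short service name, and the full FQDN. Spot-checking one pod by hand (pod names are generated, so list them first):

    kubectl get pods -o jsonpath='{.items[*].metadata.name}'
    kubectl exec <busybox-pod> -- nslookup kubernetes.io                          # external DNS
    kubectl exec <busybox-pod> -- nslookup kubernetes.default                     # service shorthand
    kubectl exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local   # cluster FQDN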

                                                
                                    
TestMultiNode/serial/AddNode (46.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-595051 -v 3 --alsologtostderr
E1127 23:46:00.815788   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-595051 -v 3 --alsologtostderr: (46.351685108s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.94s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.27s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 cp testdata/cp-test.txt multinode-595051:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 ssh -n multinode-595051 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 cp multinode-595051:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3563223297/001/cp-test_multinode-595051.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 ssh -n multinode-595051 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 cp multinode-595051:/home/docker/cp-test.txt multinode-595051-m02:/home/docker/cp-test_multinode-595051_multinode-595051-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 ssh -n multinode-595051 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 ssh -n multinode-595051-m02 "sudo cat /home/docker/cp-test_multinode-595051_multinode-595051-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 cp multinode-595051:/home/docker/cp-test.txt multinode-595051-m03:/home/docker/cp-test_multinode-595051_multinode-595051-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 ssh -n multinode-595051 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 ssh -n multinode-595051-m03 "sudo cat /home/docker/cp-test_multinode-595051_multinode-595051-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 cp testdata/cp-test.txt multinode-595051-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 ssh -n multinode-595051-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 cp multinode-595051-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3563223297/001/cp-test_multinode-595051-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 ssh -n multinode-595051-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 cp multinode-595051-m02:/home/docker/cp-test.txt multinode-595051:/home/docker/cp-test_multinode-595051-m02_multinode-595051.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 ssh -n multinode-595051-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 ssh -n multinode-595051 "sudo cat /home/docker/cp-test_multinode-595051-m02_multinode-595051.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 cp multinode-595051-m02:/home/docker/cp-test.txt multinode-595051-m03:/home/docker/cp-test_multinode-595051-m02_multinode-595051-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 ssh -n multinode-595051-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 ssh -n multinode-595051-m03 "sudo cat /home/docker/cp-test_multinode-595051-m02_multinode-595051-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 cp testdata/cp-test.txt multinode-595051-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 ssh -n multinode-595051-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 cp multinode-595051-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3563223297/001/cp-test_multinode-595051-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 ssh -n multinode-595051-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 cp multinode-595051-m03:/home/docker/cp-test.txt multinode-595051:/home/docker/cp-test_multinode-595051-m03_multinode-595051.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 ssh -n multinode-595051-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 ssh -n multinode-595051 "sudo cat /home/docker/cp-test_multinode-595051-m03_multinode-595051.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 cp multinode-595051-m03:/home/docker/cp-test.txt multinode-595051-m02:/home/docker/cp-test_multinode-595051-m03_multinode-595051-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 ssh -n multinode-595051-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 ssh -n multinode-595051-m02 "sudo cat /home/docker/cp-test_multinode-595051-m03_multinode-595051-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.18s)
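minikube cp covers three directions: host to node, node to host, and node to node, and every transfer above is verified by cat-ing the file on the receiving side with ssh -n <node>. The pattern, with an illustrative profile:

    minikube -p mn-demo cp testdata/cp-test.txt mn-demo:/home/docker/cp-test.txt                     # host -> node
    minikube -p mn-demo cp mn-demo:/home/docker/cp-test.txt /tmp/cp-test.txt                         # node -> host
    minikube -p mn-demo cp mn-demo:/home/docker/cp-test.txt mn-demo-m02:/home/docker/cp-test.txt     # node -> node
    minikube -p mn-demo ssh -n mn-demo-m02 "sudo cat /home/docker/cp-test.txt"                       # verify receipt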

                                                
                                    
TestMultiNode/serial/StopNode (2.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-595051 node stop m03: (1.202147078s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-595051 status: exit status 7 (456.689614ms)

                                                
                                                
-- stdout --
	multinode-595051
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-595051-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-595051-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-595051 status --alsologtostderr: exit status 7 (479.542501ms)

                                                
                                                
-- stdout --
	multinode-595051
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-595051-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-595051-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1127 23:46:22.113299  110425 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:46:22.113462  110425 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:46:22.113474  110425 out.go:309] Setting ErrFile to fd 2...
	I1127 23:46:22.113479  110425 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:46:22.113697  110425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4554/.minikube/bin
	I1127 23:46:22.113894  110425 out.go:303] Setting JSON to false
	I1127 23:46:22.113929  110425 mustload.go:65] Loading cluster: multinode-595051
	I1127 23:46:22.114043  110425 notify.go:220] Checking for updates...
	I1127 23:46:22.114390  110425 config.go:182] Loaded profile config "multinode-595051": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:46:22.114407  110425 status.go:255] checking status of multinode-595051 ...
	I1127 23:46:22.114828  110425 cli_runner.go:164] Run: docker container inspect multinode-595051 --format={{.State.Status}}
	I1127 23:46:22.132193  110425 status.go:330] multinode-595051 host status = "Running" (err=<nil>)
	I1127 23:46:22.132226  110425 host.go:66] Checking if "multinode-595051" exists ...
	I1127 23:46:22.132502  110425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-595051
	I1127 23:46:22.149236  110425 host.go:66] Checking if "multinode-595051" exists ...
	I1127 23:46:22.149479  110425 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 23:46:22.149524  110425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-595051
	I1127 23:46:22.166423  110425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/multinode-595051/id_rsa Username:docker}
	I1127 23:46:22.255106  110425 ssh_runner.go:195] Run: systemctl --version
	I1127 23:46:22.258925  110425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:46:22.268840  110425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:46:22.322241  110425 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:56 SystemTime:2023-11-27 23:46:22.313228726 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 23:46:22.322801  110425 kubeconfig.go:92] found "multinode-595051" server: "https://192.168.58.2:8443"
	I1127 23:46:22.322828  110425 api_server.go:166] Checking apiserver status ...
	I1127 23:46:22.322868  110425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 23:46:22.333076  110425 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup
	I1127 23:46:22.342013  110425 api_server.go:182] apiserver freezer: "2:freezer:/docker/c6c4601dedfe3c650ee48be59f93374b4667adfe091881024e85eb053a15593b/crio/crio-0b79bec1eef507cd6c999f89abe9137f6ef828e5a098739ced15b622587d551d"
	I1127 23:46:22.342165  110425 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c6c4601dedfe3c650ee48be59f93374b4667adfe091881024e85eb053a15593b/crio/crio-0b79bec1eef507cd6c999f89abe9137f6ef828e5a098739ced15b622587d551d/freezer.state
	I1127 23:46:22.349843  110425 api_server.go:204] freezer state: "THAWED"
	I1127 23:46:22.349875  110425 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1127 23:46:22.354039  110425 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1127 23:46:22.354106  110425 status.go:421] multinode-595051 apiserver status = Running (err=<nil>)
	I1127 23:46:22.354126  110425 status.go:257] multinode-595051 status: &{Name:multinode-595051 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1127 23:46:22.354145  110425 status.go:255] checking status of multinode-595051-m02 ...
	I1127 23:46:22.354373  110425 cli_runner.go:164] Run: docker container inspect multinode-595051-m02 --format={{.State.Status}}
	I1127 23:46:22.372970  110425 status.go:330] multinode-595051-m02 host status = "Running" (err=<nil>)
	I1127 23:46:22.372995  110425 host.go:66] Checking if "multinode-595051-m02" exists ...
	I1127 23:46:22.373322  110425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-595051-m02
	I1127 23:46:22.389529  110425 host.go:66] Checking if "multinode-595051-m02" exists ...
	I1127 23:46:22.389770  110425 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 23:46:22.389824  110425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-595051-m02
	I1127 23:46:22.406370  110425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17206-4554/.minikube/machines/multinode-595051-m02/id_rsa Username:docker}
	I1127 23:46:22.495196  110425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:46:22.505265  110425 status.go:257] multinode-595051-m02 status: &{Name:multinode-595051-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1127 23:46:22.505298  110425 status.go:255] checking status of multinode-595051-m03 ...
	I1127 23:46:22.505532  110425 cli_runner.go:164] Run: docker container inspect multinode-595051-m03 --format={{.State.Status}}
	I1127 23:46:22.522091  110425 status.go:330] multinode-595051-m03 host status = "Stopped" (err=<nil>)
	I1127 23:46:22.522115  110425 status.go:343] host is not running, skipping remaining checks
	I1127 23:46:22.522125  110425 status.go:257] multinode-595051-m03 status: &{Name:multinode-595051-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.14s)
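
The trace above doubles as a map of how `minikube status` inspects a node: it asks Docker for the container state, SSHes in to check the kubelet unit, and probes the apiserver's /healthz endpoint. A minimal sketch of the same checks done by hand, using names from this run (the curl step assumes the host can reach the cluster network, which holds for the Linux docker driver):

    # Container state as the docker driver sees it
    docker container inspect multinode-595051 --format={{.State.Status}}
    # kubelet unit state inside the node (exit 0 means active)
    out/minikube-linux-amd64 ssh -p multinode-595051 "sudo systemctl is-active kubelet"
    # apiserver health, the same endpoint the status check probes
    curl -sk https://192.168.58.2:8443/healthz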

TestMultiNode/serial/StartAfterStop (11.11s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 node start m03 --alsologtostderr
E1127 23:46:28.499246   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-595051 node start m03 --alsologtostderr: (10.441788697s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.11s)
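
Restarting one stopped worker leaves the rest of the cluster alone. The equivalent manual sequence, lifted straight from the test body:

    out/minikube-linux-amd64 -p multinode-595051 node start m03 --alsologtostderr
    out/minikube-linux-amd64 -p multinode-595051 status   # all three nodes report Running again
    kubectl get nodes                                     # m03 is back in the node list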

TestMultiNode/serial/RestartKeepsNodes (113.55s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-595051
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-595051
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-595051: (24.783553219s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-595051 --wait=true -v=8 --alsologtostderr
E1127 23:47:20.766398   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-595051 --wait=true -v=8 --alsologtostderr: (1m28.652505188s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-595051
--- PASS: TestMultiNode/serial/RestartKeepsNodes (113.55s)
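
The property being asserted is that a full stop/start cycle preserves the node list. A sketch of the same check by hand (before.txt and after.txt are hypothetical scratch files, not part of the test):

    out/minikube-linux-amd64 node list -p multinode-595051 > before.txt
    out/minikube-linux-amd64 stop -p multinode-595051
    out/minikube-linux-amd64 start -p multinode-595051 --wait=true
    out/minikube-linux-amd64 node list -p multinode-595051 > after.txt
    diff before.txt after.txt   # empty diff: every node survived the restart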

TestMultiNode/serial/DeleteNode (4.67s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-595051 node delete m03: (4.089900574s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.67s)
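
Deleting a node removes its Docker container and volume as well as the Kubernetes Node object. A hand-run version of the verification (the --filter flag is stock docker CLI, not something the test itself uses):

    out/minikube-linux-amd64 -p multinode-595051 node delete m03
    docker volume ls --filter name=multinode-595051-m03   # expect no matches
    kubectl get nodes                                      # m03 gone, remaining nodes Ready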

TestMultiNode/serial/StopMultiNode (23.85s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 stop
E1127 23:48:43.810510   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt: no such file or directory
E1127 23:48:51.043629   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/functional-223758/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-595051 stop: (23.66052484s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-595051 status: exit status 7 (97.016289ms)
-- stdout --
	multinode-595051
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-595051-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-595051 status --alsologtostderr: exit status 7 (92.095996ms)
-- stdout --
	multinode-595051
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-595051-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1127 23:48:55.666145  120656 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:48:55.666281  120656 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:48:55.666290  120656 out.go:309] Setting ErrFile to fd 2...
	I1127 23:48:55.666294  120656 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:48:55.666461  120656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4554/.minikube/bin
	I1127 23:48:55.666628  120656 out.go:303] Setting JSON to false
	I1127 23:48:55.666655  120656 mustload.go:65] Loading cluster: multinode-595051
	I1127 23:48:55.666772  120656 notify.go:220] Checking for updates...
	I1127 23:48:55.667036  120656 config.go:182] Loaded profile config "multinode-595051": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:48:55.667050  120656 status.go:255] checking status of multinode-595051 ...
	I1127 23:48:55.667495  120656 cli_runner.go:164] Run: docker container inspect multinode-595051 --format={{.State.Status}}
	I1127 23:48:55.684342  120656 status.go:330] multinode-595051 host status = "Stopped" (err=<nil>)
	I1127 23:48:55.684371  120656 status.go:343] host is not running, skipping remaining checks
	I1127 23:48:55.684377  120656 status.go:257] multinode-595051 status: &{Name:multinode-595051 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1127 23:48:55.684401  120656 status.go:255] checking status of multinode-595051-m02 ...
	I1127 23:48:55.684624  120656 cli_runner.go:164] Run: docker container inspect multinode-595051-m02 --format={{.State.Status}}
	I1127 23:48:55.701585  120656 status.go:330] multinode-595051-m02 host status = "Stopped" (err=<nil>)
	I1127 23:48:55.701625  120656 status.go:343] host is not running, skipping remaining checks
	I1127 23:48:55.701631  120656 status.go:257] multinode-595051-m02 status: &{Name:multinode-595051-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.85s)
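
Note the exit codes: `minikube status` returns 0 only when everything is up, and a non-zero code (7 in both runs above) once hosts are stopped, so scripts can gate on it directly. An illustrative use:

    out/minikube-linux-amd64 -p multinode-595051 stop
    if ! out/minikube-linux-amd64 -p multinode-595051 status > /dev/null; then
        echo "cluster is down, as expected after stop"
    fi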

TestMultiNode/serial/RestartMultiNode (78.61s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-595051 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-595051 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m18.025566416s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-595051 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (78.61s)

TestMultiNode/serial/ValidateNameConflict (26.1s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-595051
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-595051-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-595051-m02 --driver=docker  --container-runtime=crio: exit status 14 (84.603904ms)
-- stdout --
	* [multinode-595051-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-4554/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4554/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-595051-m02' is duplicated with machine name 'multinode-595051-m02' in profile 'multinode-595051'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-595051-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-595051-m03 --driver=docker  --container-runtime=crio: (23.791955769s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-595051
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-595051: exit status 80 (274.108853ms)
-- stdout --
	* Adding node m03 to cluster multinode-595051
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-595051-m03 already exists in multinode-595051-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-595051-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-595051-m03: (1.891438243s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.10s)
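
Profile names share a namespace with the machine names inside existing profiles: multinode-595051-m02 is refused (exit 14) because a worker machine of that name still exists, while multinode-595051-m03 succeeds because that machine was deleted earlier in DeleteNode. A condensed replay:

    # collides with the worker machine of profile multinode-595051
    out/minikube-linux-amd64 start -p multinode-595051-m02 --driver=docker --container-runtime=crio   # exit 14, MK_USAGE
    # no collision: the m03 machine was removed earlier in the suite
    out/minikube-linux-amd64 start -p multinode-595051-m03 --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 delete -p multinode-595051-m03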

TestPreload (148.65s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-608738 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1127 23:51:00.815008   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-608738 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m11.733632059s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-608738 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-608738
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-608738: (5.740949281s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-608738 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1127 23:52:20.766430   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-608738 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m7.906459305s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-608738 image list
helpers_test.go:175: Cleaning up "test-preload-608738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-608738
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-608738: (2.22926471s)
--- PASS: TestPreload (148.65s)
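
The shape of this test is handy when debugging preload problems: start without the preloaded image tarball, pull an image the preload would not contain, then restart with preload enabled and confirm the pulled image survived. Condensed from the run above:

    out/minikube-linux-amd64 start -p test-preload-608738 --memory=2200 --preload=false \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
    out/minikube-linux-amd64 -p test-preload-608738 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 stop -p test-preload-608738
    out/minikube-linux-amd64 start -p test-preload-608738 --memory=2200 --wait=true \
      --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p test-preload-608738 image list   # busybox should still be listed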

TestScheduledStopUnix (100.2s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-825805 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-825805 --memory=2048 --driver=docker  --container-runtime=crio: (24.717805324s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-825805 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-825805 -n scheduled-stop-825805
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-825805 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-825805 --cancel-scheduled
E1127 23:53:51.043828   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/functional-223758/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-825805 -n scheduled-stop-825805
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-825805
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-825805 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-825805
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-825805: exit status 7 (75.812914ms)
-- stdout --
	scheduled-stop-825805
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-825805 -n scheduled-stop-825805
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-825805 -n scheduled-stop-825805: exit status 7 (77.163003ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-825805" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-825805
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-825805: (4.051271941s)
--- PASS: TestScheduledStopUnix (100.20s)
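
The scheduled-stop flow exercised above, as one might drive it interactively (the sleep is illustrative, not part of the test):

    out/minikube-linux-amd64 stop -p scheduled-stop-825805 --schedule 5m       # arm a 5-minute timer
    out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-825805
    out/minikube-linux-amd64 stop -p scheduled-stop-825805 --cancel-scheduled  # disarm it
    out/minikube-linux-amd64 stop -p scheduled-stop-825805 --schedule 15s      # short timer, let it fire
    sleep 20
    out/minikube-linux-amd64 status -p scheduled-stop-825805                   # exit 7, host: Stopped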

TestInsufficientStorage (13.14s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-091621 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-091621 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.717872578s)
-- stdout --
	{"specversion":"1.0","id":"f75b5fd4-d51f-47b3-90fe-cdc7f87c28c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-091621] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"07728f85-4be6-497a-8c49-075a1f83917c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17206"}}
	{"specversion":"1.0","id":"c73e17ad-27c3-4188-87fd-0a271c71524c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a298b417-6a5e-4666-ab9f-1e2659cf2133","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17206-4554/kubeconfig"}}
	{"specversion":"1.0","id":"b34fa97c-97e6-494e-a0f1-1da894e4947f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4554/.minikube"}}
	{"specversion":"1.0","id":"b418eec9-81f1-4d26-bd70-9a2ae65e120a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b4395db3-a335-4b5f-9e04-7d8a99284320","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b4ee5248-1da4-48c5-b962-9ab37058dcea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"01f1e6cb-5ef6-4a90-b56a-7fb4b28881e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"763a8946-b02d-4158-bf93-5c76b0a07550","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ed213110-63e5-4dfb-bc29-80a62ab33f15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"6ac008b4-71e2-4518-8d0c-7d68a8092333","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-091621 in cluster insufficient-storage-091621","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c1735a23-a88e-4408-b7d8-c0869fd6f82b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f14feb5e-9130-488b-b338-97f1de51a685","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"270286c9-29aa-4467-9626-afcb423eeec1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-091621 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-091621 --output=json --layout=cluster: exit status 7 (272.683458ms)
-- stdout --
	{"Name":"insufficient-storage-091621","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-091621","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1127 23:55:05.935569  142425 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-091621" does not appear in /home/jenkins/minikube-integration/17206-4554/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-091621 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-091621 --output=json --layout=cluster: exit status 7 (278.896941ms)
-- stdout --
	{"Name":"insufficient-storage-091621","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-091621","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1127 23:55:06.215600  142515 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-091621" does not appear in /home/jenkins/minikube-integration/17206-4554/kubeconfig
	E1127 23:55:06.225303  142515 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/insufficient-storage-091621/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-091621" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-091621
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-091621: (1.874436894s)
--- PASS: TestInsufficientStorage (13.14s)
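
The "full disk" here is simulated through MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19, visible in the CloudEvents stream above. For a genuinely full /var, the RSRC_DOCKER_STORAGE payload carries its own remediation list, which unescapes to roughly:

    docker system prune                    # optionally with -a
    minikube ssh -- docker system prune    # if using the docker container runtime
    # or bypass the capacity check entirely, at your own risk:
    out/minikube-linux-amd64 start -p insufficient-storage-091621 --force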

TestKubernetesUpgrade (350.39s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-406137 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-406137 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (51.256330037s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-406137
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-406137: (1.367095324s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-406137 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-406137 status --format={{.Host}}: exit status 7 (101.239547ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-406137 --memory=2200 --kubernetes-version=v1.29.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-406137 --memory=2200 --kubernetes-version=v1.29.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m31.560454246s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-406137 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-406137 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-406137 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (104.66841ms)
-- stdout --
	* [kubernetes-upgrade-406137] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-4554/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4554/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-406137
	    minikube start -p kubernetes-upgrade-406137 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4061372 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-406137 --kubernetes-version=v1.29.0-rc.0
	    
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-406137 --memory=2200 --kubernetes-version=v1.29.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-406137 --memory=2200 --kubernetes-version=v1.29.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.700437733s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-406137" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-406137
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-406137: (2.208678095s)
--- PASS: TestKubernetesUpgrade (350.39s)
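
Condensed, the supported paths: an upgrade is stop-then-start with the newer version, while an in-place downgrade is refused with exit 106 and the error text itself names the recreate path:

    # upgrade v1.16.0 -> v1.29.0-rc.0
    out/minikube-linux-amd64 stop -p kubernetes-upgrade-406137
    out/minikube-linux-amd64 start -p kubernetes-upgrade-406137 --memory=2200 \
      --kubernetes-version=v1.29.0-rc.0 --driver=docker --container-runtime=crio
    # downgrade: delete and recreate, per the K8S_DOWNGRADE_UNSUPPORTED suggestion
    minikube delete -p kubernetes-upgrade-406137
    minikube start -p kubernetes-upgrade-406137 --kubernetes-version=v1.16.0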

TestMissingContainerUpgrade (165.93s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.9.0.3685669836.exe start -p missing-upgrade-231159 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.9.0.3685669836.exe start -p missing-upgrade-231159 --memory=2200 --driver=docker  --container-runtime=crio: (1m30.001854307s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-231159
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-231159: (5.038954256s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-231159
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-231159 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-231159 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m8.172144554s)
helpers_test.go:175: Cleaning up "missing-upgrade-231159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-231159
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-231159: (2.115673719s)
--- PASS: TestMissingContainerUpgrade (165.93s)
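
The scenario in full: provision with an old release, remove the node container behind minikube's back, then let the current binary notice the missing container and recreate it:

    /tmp/minikube-v1.9.0.3685669836.exe start -p missing-upgrade-231159 --memory=2200 \
      --driver=docker --container-runtime=crio
    docker stop missing-upgrade-231159 && docker rm missing-upgrade-231159
    out/minikube-linux-amd64 start -p missing-upgrade-231159 --memory=2200 \
      --driver=docker --container-runtime=crio   # detects and recreates the container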

TestStoppedBinaryUpgrade/Setup (0.77s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.77s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-200892 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-200892 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (95.414425ms)
-- stdout --
	* [NoKubernetes-200892] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-4554/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4554/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
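
--no-kubernetes and --kubernetes-version are mutually exclusive, and the error message covers the less obvious case where the version comes from global config rather than the command line. The fix it prescribes:

    minikube config unset kubernetes-version   # clear a globally pinned version, if any
    out/minikube-linux-amd64 start -p NoKubernetes-200892 --no-kubernetes \
      --driver=docker --container-runtime=crio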

TestNoKubernetes/serial/StartWithK8s (35.25s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-200892 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-200892 --driver=docker  --container-runtime=crio: (34.833909226s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-200892 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (35.25s)

TestNoKubernetes/serial/StartWithStopK8s (8.77s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-200892 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-200892 --no-kubernetes --driver=docker  --container-runtime=crio: (6.113628952s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-200892 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-200892 status -o json: exit status 2 (377.951375ms)
-- stdout --
	{"Name":"NoKubernetes-200892","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-200892
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-200892: (2.278362453s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.77s)

TestNoKubernetes/serial/Start (9.94s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-200892 --no-kubernetes --driver=docker  --container-runtime=crio
E1127 23:56:00.815812   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-200892 --no-kubernetes --driver=docker  --container-runtime=crio: (9.941743919s)
--- PASS: TestNoKubernetes/serial/Start (9.94s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-200892 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-200892 "sudo systemctl is-active --quiet service kubelet": exit status 1 (306.567079ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
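
systemctl is-active exits 0 only when the unit is active (3 typically means inactive), and `minikube ssh` propagates the remote exit code, so the non-zero exit here is precisely the assertion the test wants:

    out/minikube-linux-amd64 ssh -p NoKubernetes-200892 "sudo systemctl is-active --quiet service kubelet"
    echo $?   # non-zero while running with --no-kubernetes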

TestNoKubernetes/serial/ProfileList (3.24s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (2.506509467s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (3.24s)

TestNoKubernetes/serial/Stop (1.66s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-200892
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-200892: (1.656932035s)
--- PASS: TestNoKubernetes/serial/Stop (1.66s)

TestNoKubernetes/serial/StartNoArgs (8.87s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-200892 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-200892 --driver=docker  --container-runtime=crio: (8.867657703s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.87s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-200892 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-200892 "sudo systemctl is-active --quiet service kubelet": exit status 1 (359.050693ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.57s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-211581
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.57s)

TestNetworkPlugins/group/false (4.53s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-445585 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-445585 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (228.485872ms)
-- stdout --
	* [false-445585] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-4554/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4554/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1127 23:57:22.116650  181786 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:57:22.116782  181786 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:57:22.116790  181786 out.go:309] Setting ErrFile to fd 2...
	I1127 23:57:22.116795  181786 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:57:22.117003  181786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4554/.minikube/bin
	I1127 23:57:22.117572  181786 out.go:303] Setting JSON to false
	I1127 23:57:22.118848  181786 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2394,"bootTime":1701127048,"procs":458,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 23:57:22.118915  181786 start.go:138] virtualization: kvm guest
	I1127 23:57:22.122426  181786 out.go:177] * [false-445585] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 23:57:22.123764  181786 out.go:177]   - MINIKUBE_LOCATION=17206
	I1127 23:57:22.125101  181786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:57:22.123841  181786 notify.go:220] Checking for updates...
	I1127 23:57:22.127924  181786 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-4554/kubeconfig
	I1127 23:57:22.129410  181786 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4554/.minikube
	I1127 23:57:22.130806  181786 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 23:57:22.132216  181786 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 23:57:22.133952  181786 config.go:182] Loaded profile config "force-systemd-env-794552": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:57:22.134070  181786 config.go:182] Loaded profile config "kubernetes-upgrade-406137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1127 23:57:22.134174  181786 config.go:182] Loaded profile config "missing-upgrade-231159": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1127 23:57:22.134255  181786 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:57:22.172972  181786 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 23:57:22.173077  181786 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:57:22.252232  181786 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:true NGoroutines:81 SystemTime:2023-11-27 23:57:22.237784704 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1127 23:57:22.252378  181786 docker.go:295] overlay module found
	I1127 23:57:22.254410  181786 out.go:177] * Using the docker driver based on user configuration
	I1127 23:57:22.255875  181786 start.go:298] selected driver: docker
	I1127 23:57:22.255897  181786 start.go:902] validating driver "docker" against <nil>
	I1127 23:57:22.255913  181786 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 23:57:22.258605  181786 out.go:177] 
	W1127 23:57:22.260096  181786 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1127 23:57:22.261468  181786 out.go:177] 
** /stderr **
E1127 23:57:23.859430   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt: no such file or directory
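
The refusal is categorical: the crio runtime brings no built-in pod networking, so minikube rejects --cni=false during flag validation, before any node is created. Any concrete CNI, or the default auto-selection, passes; for example (bridge is one of minikube's documented --cni values, not taken from this run):

    out/minikube-linux-amd64 start -p false-445585 --cni=false --driver=docker --container-runtime=crio    # exit 14
    out/minikube-linux-amd64 start -p false-445585 --cni=bridge --driver=docker --container-runtime=crio   # accepted
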
net_test.go:88: 
----------------------- debugLogs start: false-445585 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-445585

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-445585

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-445585

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-445585

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-445585

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-445585

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-445585

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-445585

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-445585

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-445585

>>> host: /etc/nsswitch.conf:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: /etc/hosts:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: /etc/resolv.conf:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-445585

>>> host: crictl pods:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: crictl containers:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> k8s: describe netcat deployment:
error: context "false-445585" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-445585" does not exist

>>> k8s: netcat logs:
error: context "false-445585" does not exist

>>> k8s: describe coredns deployment:
error: context "false-445585" does not exist

>>> k8s: describe coredns pods:
error: context "false-445585" does not exist

>>> k8s: coredns logs:
error: context "false-445585" does not exist

>>> k8s: describe api server pod(s):
error: context "false-445585" does not exist

>>> k8s: api server logs:
error: context "false-445585" does not exist

>>> host: /etc/cni:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: ip a s:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: ip r s:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: iptables-save:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: iptables table nat:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> k8s: describe kube-proxy daemon set:
error: context "false-445585" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-445585" does not exist

>>> k8s: kube-proxy logs:
error: context "false-445585" does not exist

>>> host: kubelet daemon status:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: kubelet daemon config:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> k8s: kubelet logs:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Nov 2023 23:57:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-406137
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt
    server: https://127.0.0.1:32923
  name: missing-upgrade-231159
contexts:
- context:
    cluster: kubernetes-upgrade-406137
    user: kubernetes-upgrade-406137
  name: kubernetes-upgrade-406137
- context:
    cluster: missing-upgrade-231159
    user: missing-upgrade-231159
  name: missing-upgrade-231159
current-context: kubernetes-upgrade-406137
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-406137
  user:
    client-certificate: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/kubernetes-upgrade-406137/client.crt
    client-key: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/kubernetes-upgrade-406137/client.key
- name: missing-upgrade-231159
  user:
    client-certificate: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/missing-upgrade-231159/client.crt
    client-key: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/missing-upgrade-231159/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-445585

>>> host: docker daemon status:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: docker daemon config:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: /etc/docker/daemon.json:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: docker system info:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: cri-docker daemon status:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: cri-docker daemon config:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: cri-dockerd version:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: containerd daemon status:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: containerd daemon config:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: /etc/containerd/config.toml:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: containerd config dump:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: crio daemon status:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: crio daemon config:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: /etc/crio:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

>>> host: crio config:
* Profile "false-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-445585"

----------------------- debugLogs end: false-445585 [took: 4.118288443s] --------------------------------
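Every probe above fails with "context was not found" or "Profile ... not found", evidently because no cluster was ever started for the false-445585 profile; the debug-log collector runs its full probe list regardless, and the test still passes. The two commands the output itself recommends are the quickest way to see which contexts and profiles actually exist on a machine, assuming kubectl and minikube are on PATH:

	kubectl config get-contexts
	minikube profile list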
helpers_test.go:175: Cleaning up "false-445585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-445585
--- PASS: TestNetworkPlugins/group/false (4.53s)

TestPause/serial/Start (44.47s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-567164 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-567164 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (44.472615913s)
--- PASS: TestPause/serial/Start (44.47s)

TestPause/serial/SecondStartNoReconfiguration (28.03s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-567164 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-567164 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.016622396s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (28.03s)

TestStartStop/group/old-k8s-version/serial/FirstStart (112.03s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-791514 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-791514 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (1m52.027532382s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (112.03s)

TestPause/serial/Pause (0.9s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-567164 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.90s)

TestPause/serial/VerifyStatus (0.3s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-567164 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-567164 --output=json --layout=cluster: exit status 2 (304.051037ms)
-- stdout --
	{"Name":"pause-567164","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-567164","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
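The status JSON above reports component state with HTTP-style codes (200 OK, 405 Stopped, 418 Paused), so a paused cluster makes the status command exit non-zero even though nothing is wrong. To pull just the per-component states out of that payload, assuming jq is available, one option is:

	out/minikube-linux-amd64 status -p pause-567164 --output=json --layout=cluster | jq '.Nodes[].Components'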
--- PASS: TestPause/serial/VerifyStatus (0.30s)

TestPause/serial/Unpause (0.97s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-567164 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.97s)

TestPause/serial/PauseAgain (0.84s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-567164 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

TestPause/serial/DeletePaused (2.81s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-567164 --alsologtostderr -v=5
E1127 23:58:51.043701   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/functional-223758/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-567164 --alsologtostderr -v=5: (2.809427901s)
--- PASS: TestPause/serial/DeletePaused (2.81s)

TestPause/serial/VerifyDeletedResources (0.49s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-567164
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-567164: exit status 1 (16.430118ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-567164: no such volume
** /stderr **
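The failed volume inspect is the point of this step: the volume must be gone after delete. A rough manual equivalent of the checks this test performs, using stock docker filters, would be:

	docker ps -a --filter name=pause-567164
	docker volume ls --filter name=pause-567164
	docker network ls --filter name=pause-567164

Each command should print only a header row once the profile has been deleted.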
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.49s)

TestStartStop/group/no-preload/serial/FirstStart (72.37s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-190318 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-190318 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0: (1m12.367024676s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (72.37s)

TestStartStop/group/no-preload/serial/DeployApp (8.76s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-190318 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [46539796-3bbf-430f-a70c-d99353a2d58f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [46539796-3bbf-430f-a70c-d99353a2d58f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.013372076s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-190318 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.76s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.85s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-190318 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-190318 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.85s)

TestStartStop/group/no-preload/serial/Stop (11.92s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-190318 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-190318 --alsologtostderr -v=3: (11.919361768s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.92s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-190318 -n no-preload-190318
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-190318 -n no-preload-190318: exit status 7 (81.350205ms)
-- stdout --
	Stopped
-- /stdout --
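Exit status 7 with the host reported as Stopped appears to be the expected shape after a stop, which is why the harness tolerates it ("may be ok"). Replaying the check by hand is just the same command with the exit code captured; the echo is illustrative:

	out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-190318 || echo "exit $?: host stopped"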
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-190318 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (336.83s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-190318 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-190318 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0: (5m36.483669273s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-190318 -n no-preload-190318
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (336.83s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.39s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-791514 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [552579e8-2518-4e71-bcdc-9ccf88c284fe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [552579e8-2518-4e71-bcdc-9ccf88c284fe] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.013748539s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-791514 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.39s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.77s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-791514 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-791514 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.77s)

TestStartStop/group/old-k8s-version/serial/Stop (12.02s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-791514 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-791514 --alsologtostderr -v=3: (12.017455197s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.02s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-791514 -n old-k8s-version-791514
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-791514 -n old-k8s-version-791514: exit status 7 (90.287115ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-791514 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (42.21s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-791514 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E1128 00:01:00.815462   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-791514 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (41.858488181s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-791514 -n old-k8s-version-791514
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (42.21s)

TestStartStop/group/embed-certs/serial/FirstStart (70.98s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-992445 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-992445 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m10.9835101s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (70.98s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (17.02s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-ck56w" [92e9299e-9b13-4b7a-bfc9-d7c840f1854e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-ck56w" [92e9299e-9b13-4b7a-bfc9-d7c840f1854e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.014816888s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (17.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-ck56w" [92e9299e-9b13-4b7a-bfc9-d7c840f1854e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008555426s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-791514 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.3s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-791514 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/old-k8s-version/serial/Pause (2.91s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-791514 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-791514 -n old-k8s-version-791514
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-791514 -n old-k8s-version-791514: exit status 2 (331.286434ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-791514 -n old-k8s-version-791514
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-791514 -n old-k8s-version-791514: exit status 2 (320.536652ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-791514 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-791514 -n old-k8s-version-791514
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-791514 -n old-k8s-version-791514
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.91s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.18s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-665567 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-665567 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m10.176502356s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.18s)

TestStartStop/group/newest-cni/serial/FirstStart (35.41s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-151872 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0
E1128 00:02:20.766869   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-151872 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0: (35.408508446s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.41s)

TestStartStop/group/embed-certs/serial/DeployApp (8.38s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-992445 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0248a8bc-9288-415b-97f6-6bf85c2657ab] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0248a8bc-9288-415b-97f6-6bf85c2657ab] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.01608911s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-992445 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.38s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-992445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-992445 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/embed-certs/serial/Stop (12.03s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-992445 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-992445 --alsologtostderr -v=3: (12.032746424s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.03s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-151872 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/newest-cni/serial/Stop (1.23s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-151872 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-151872 --alsologtostderr -v=3: (1.232935525s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.23s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-151872 -n newest-cni-151872
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-151872 -n newest-cni-151872: exit status 7 (75.321372ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-151872 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (25.72s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-151872 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-151872 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0: (25.37794723s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-151872 -n newest-cni-151872
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (25.72s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-992445 -n embed-certs-992445
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-992445 -n embed-certs-992445: exit status 7 (79.327539ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-992445 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (338.09s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-992445 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-992445 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m37.743099015s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-992445 -n embed-certs-992445
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (338.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-151872 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/newest-cni/serial/Pause (2.84s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-151872 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-151872 -n newest-cni-151872
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-151872 -n newest-cni-151872: exit status 2 (331.226878ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-151872 -n newest-cni-151872
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-151872 -n newest-cni-151872: exit status 2 (337.759396ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-151872 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-151872 -n newest-cni-151872
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-151872 -n newest-cni-151872
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.84s)
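
(Note: the pause cycle above can be replayed by hand with the same commands the test runs, using the profile name from this run; while paused, minikube status exits with code 2, which the test explicitly tolerates. A minimal sketch:)

	out/minikube-linux-amd64 pause -p newest-cni-151872 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-151872 -n newest-cni-151872   # prints "Paused", exit status 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-151872 -n newest-cni-151872     # prints "Stopped", exit status 2
	out/minikube-linux-amd64 unpause -p newest-cni-151872 --alsologtostderr -v=1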

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-665567 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7cd96a60-aff8-461f-ac02-848edb3b1091] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7cd96a60-aff8-461f-ac02-848edb3b1091] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.016856125s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-665567 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.37s)

TestNetworkPlugins/group/auto/Start (69.77s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-445585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-445585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m9.766290346s)
--- PASS: TestNetworkPlugins/group/auto/Start (69.77s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-665567 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-665567 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-665567 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-665567 --alsologtostderr -v=3: (11.92239732s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.92s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-665567 -n default-k8s-diff-port-665567
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-665567 -n default-k8s-diff-port-665567: exit status 7 (90.7229ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-665567 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (337.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-665567 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1128 00:03:51.043286   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/functional-223758/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-665567 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m37.468006366s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-665567 -n default-k8s-diff-port-665567
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (337.84s)

TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-445585 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

TestNetworkPlugins/group/auto/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-445585 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hf7hf" [2558b2dc-4d52-4020-9f7e-631e94bc33ab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hf7hf" [2558b2dc-4d52-4020-9f7e-631e94bc33ab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.008996157s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.25s)

TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-445585 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-445585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-445585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

TestNetworkPlugins/group/flannel/Start (57.74s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-445585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1128 00:05:23.811064   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt: no such file or directory
E1128 00:05:36.452953   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/old-k8s-version-791514/client.crt: no such file or directory
E1128 00:05:36.458254   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/old-k8s-version-791514/client.crt: no such file or directory
E1128 00:05:36.468569   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/old-k8s-version-791514/client.crt: no such file or directory
E1128 00:05:36.488880   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/old-k8s-version-791514/client.crt: no such file or directory
E1128 00:05:36.529167   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/old-k8s-version-791514/client.crt: no such file or directory
E1128 00:05:36.609506   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/old-k8s-version-791514/client.crt: no such file or directory
E1128 00:05:36.769921   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/old-k8s-version-791514/client.crt: no such file or directory
E1128 00:05:37.090572   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/old-k8s-version-791514/client.crt: no such file or directory
E1128 00:05:37.731095   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/old-k8s-version-791514/client.crt: no such file or directory
E1128 00:05:39.011941   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/old-k8s-version-791514/client.crt: no such file or directory
E1128 00:05:41.572531   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/old-k8s-version-791514/client.crt: no such file or directory
E1128 00:05:46.693353   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/old-k8s-version-791514/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-445585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (57.743818911s)
--- PASS: TestNetworkPlugins/group/flannel/Start (57.74s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-dz9sj" [62d8cbb2-f888-43c7-8d55-5c324b177838] Running
E1128 00:05:56.934455   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/old-k8s-version-791514/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.015700188s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)
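
(Note: the readiness wait above is performed by a test helper; a rough hand-run equivalent, assuming the same context, namespace, and label selector, is kubectl wait:)

	kubectl --context flannel-445585 wait --namespace=kube-flannel --for=condition=ready pod --selector=app=flannel --timeout=600s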

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-445585 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (9.33s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-445585 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6jfqf" [7d1b12d8-d954-4b5a-b698-fb4f8bf68d09] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1128 00:06:00.815510   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/ingress-addon-legacy-719415/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-6jfqf" [7d1b12d8-d954-4b5a-b698-fb4f8bf68d09] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.016769575s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.33s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-z7tjz" [56584e99-e7c5-48b5-b16f-99bfd64d0725] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-z7tjz" [56584e99-e7c5-48b5-b16f-99bfd64d0725] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.016608848s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.02s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-445585 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-445585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-445585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-z7tjz" [56584e99-e7c5-48b5-b16f-99bfd64d0725] Running
E1128 00:06:17.414641   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/old-k8s-version-791514/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00947012s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-190318 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-190318 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/no-preload/serial/Pause (2.83s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-190318 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-190318 -n no-preload-190318
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-190318 -n no-preload-190318: exit status 2 (311.300416ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-190318 -n no-preload-190318
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-190318 -n no-preload-190318: exit status 2 (312.195384ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-190318 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-190318 -n no-preload-190318
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-190318 -n no-preload-190318
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.83s)

TestNetworkPlugins/group/enable-default-cni/Start (37.63s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-445585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-445585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (37.626693181s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (37.63s)

TestNetworkPlugins/group/bridge/Start (38.15s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-445585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1128 00:06:58.375318   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/old-k8s-version-791514/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-445585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (38.14876148s)
--- PASS: TestNetworkPlugins/group/bridge/Start (38.15s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-445585 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-445585 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gfkzp" [607d5e4a-97ef-48bd-a809-1f1d43ffe5c4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gfkzp" [607d5e4a-97ef-48bd-a809-1f1d43ffe5c4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.009224675s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-445585 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-445585 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dlvkq" [126f0350-95cb-4338-a52a-2c5d9daa4edf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dlvkq" [126f0350-95cb-4338-a52a-2c5d9daa4edf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.009020318s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

TestNetworkPlugins/group/enable-default-cni/DNS (32.54s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-445585 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-445585 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.153376519s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-445585 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-445585 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.15950222s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-445585 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (32.54s)
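
(Note: the two timeouts above are the test retrying the same probe until cluster DNS responds; a rough hand-run equivalent, assuming the same context and deployment, is a shell retry loop:)

	until kubectl --context enable-default-cni-445585 exec deployment/netcat -- nslookup kubernetes.default; do sleep 5; done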

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-445585 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-445585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-445585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

TestNetworkPlugins/group/calico/Start (63.11s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-445585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-445585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m3.108273519s)
--- PASS: TestNetworkPlugins/group/calico/Start (63.11s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-445585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-445585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/kindnet/Start (73.23s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-445585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1128 00:08:20.296071   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/old-k8s-version-791514/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-445585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m13.228814141s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (73.23s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8f5dw" [2509b963-3295-4c87-8ae0-eca623d1a62e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8f5dw" [2509b963-3295-4c87-8ae0-eca623d1a62e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.015043441s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.02s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-98g4l" [2f5ef0cb-d3be-410e-bceb-92b89512b428] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.021651574s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8f5dw" [2509b963-3295-4c87-8ae0-eca623d1a62e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010342501s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-992445 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-445585 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-445585 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-t9ngm" [7a2c6255-2382-49ce-9ecc-c44851cc4ac1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-t9ngm" [7a2c6255-2382-49ce-9ecc-c44851cc4ac1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.01600797s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.33s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-992445 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/embed-certs/serial/Pause (2.9s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-992445 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-992445 -n embed-certs-992445
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-992445 -n embed-certs-992445: exit status 2 (304.821437ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-992445 -n embed-certs-992445
E1128 00:08:51.043668   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/functional-223758/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-992445 -n embed-certs-992445: exit status 2 (297.112412ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-992445 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-992445 -n embed-certs-992445
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-992445 -n embed-certs-992445
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.90s)

TestNetworkPlugins/group/custom-flannel/Start (55.68s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-445585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-445585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (55.683153232s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.68s)

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-445585 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-445585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/calico/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-445585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8h2zd" [13bbdc5c-bb5b-4bf4-a27a-4c9e65e1f78e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8h2zd" [13bbdc5c-bb5b-4bf4-a27a-4c9e65e1f78e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.018445474s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.02s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-kpgz4" [3dd98c2f-bd91-4812-bb40-7ff0b23af0b9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.01657812s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-445585 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-445585 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ttcfg" [0debfb88-0bb1-46fc-be28-baee5c0ee160] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1128 00:09:29.248165   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/auto-445585/client.crt: no such file or directory
E1128 00:09:29.253466   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/auto-445585/client.crt: no such file or directory
E1128 00:09:29.263745   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/auto-445585/client.crt: no such file or directory
E1128 00:09:29.284030   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/auto-445585/client.crt: no such file or directory
E1128 00:09:29.324331   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/auto-445585/client.crt: no such file or directory
E1128 00:09:29.404690   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/auto-445585/client.crt: no such file or directory
E1128 00:09:29.565045   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/auto-445585/client.crt: no such file or directory
E1128 00:09:29.885915   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/auto-445585/client.crt: no such file or directory
E1128 00:09:30.526460   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/auto-445585/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-ttcfg" [0debfb88-0bb1-46fc-be28-baee5c0ee160] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.008714859s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8h2zd" [13bbdc5c-bb5b-4bf4-a27a-4c9e65e1f78e] Running
E1128 00:09:31.807040   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/auto-445585/client.crt: no such file or directory
E1128 00:09:34.368039   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/auto-445585/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010074123s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-665567 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-665567 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-445585 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-665567 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-665567 -n default-k8s-diff-port-665567
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-665567 -n default-k8s-diff-port-665567: exit status 2 (316.756412ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-665567 -n default-k8s-diff-port-665567
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-665567 -n default-k8s-diff-port-665567: exit status 2 (296.420701ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-665567 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-665567 -n default-k8s-diff-port-665567
E1128 00:09:39.488954   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/auto-445585/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-665567 -n default-k8s-diff-port-665567
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.81s)
E1128 00:09:49.729935   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/auto-445585/client.crt: no such file or directory

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-445585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-445585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-445585 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-445585 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nnjzd" [bcab3a43-449d-4b51-9706-4e832ebe8626] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-nnjzd" [bcab3a43-449d-4b51-9706-4e832ebe8626] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.009257008s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.25s)

TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-445585 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-445585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-445585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    

Test skip (27/314)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.0/kubectl (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-713616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-713616
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
E1127 23:57:20.766339   11306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/addons-931360/client.crt: no such file or directory
panic.go:523: 
----------------------- debugLogs start: kubenet-445585 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-445585

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-445585

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-445585

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-445585

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-445585

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-445585

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-445585

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-445585

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-445585

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-445585

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-445585

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-445585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-445585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-445585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-445585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-445585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-445585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-445585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-445585" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-445585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-445585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-445585" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt
    server: https://127.0.0.1:32923
  name: missing-upgrade-231159
contexts:
- context:
    cluster: missing-upgrade-231159
    user: missing-upgrade-231159
  name: missing-upgrade-231159
current-context: ""
kind: Config
preferences: {}
users:
- name: missing-upgrade-231159
  user:
    client-certificate: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/missing-upgrade-231159/client.crt
    client-key: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/missing-upgrade-231159/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-445585

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-445585"

                                                
                                                
----------------------- debugLogs end: kubenet-445585 [took: 4.37782483s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-445585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-445585
--- SKIP: TestNetworkPlugins/group/kubenet (4.59s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-445585 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-445585

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-445585

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-445585

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-445585

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-445585

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-445585

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-445585

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-445585

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-445585

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-445585

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-445585

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-445585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-445585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-445585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-445585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-445585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-445585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-445585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-445585" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-445585

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-445585

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-445585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-445585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-445585

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-445585

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-445585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-445585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-445585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-445585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-445585" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Nov 2023 23:57:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: force-systemd-env-794552
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Nov 2023 23:57:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-406137
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17206-4554/.minikube/ca.crt
    server: https://127.0.0.1:32923
  name: missing-upgrade-231159
contexts:
- context:
    cluster: force-systemd-env-794552
    extensions:
    - extension:
        last-update: Mon, 27 Nov 2023 23:57:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: force-systemd-env-794552
  name: force-systemd-env-794552
- context:
    cluster: kubernetes-upgrade-406137
    user: kubernetes-upgrade-406137
  name: kubernetes-upgrade-406137
- context:
    cluster: missing-upgrade-231159
    user: missing-upgrade-231159
  name: missing-upgrade-231159
current-context: force-systemd-env-794552
kind: Config
preferences: {}
users:
- name: force-systemd-env-794552
  user:
    client-certificate: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/force-systemd-env-794552/client.crt
    client-key: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/force-systemd-env-794552/client.key
- name: kubernetes-upgrade-406137
  user:
    client-certificate: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/kubernetes-upgrade-406137/client.crt
    client-key: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/kubernetes-upgrade-406137/client.key
- name: missing-upgrade-231159
  user:
    client-certificate: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/missing-upgrade-231159/client.crt
    client-key: /home/jenkins/minikube-integration/17206-4554/.minikube/profiles/missing-upgrade-231159/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-445585

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-445585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-445585"

                                                
                                                
----------------------- debugLogs end: cilium-445585 [took: 4.4917902s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-445585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-445585
--- SKIP: TestNetworkPlugins/group/cilium (4.74s)